| query_id (string, length 32) | query (string, length 6–5.38k) | positive_passages (list, length 1–22) | negative_passages (list, length 9–100) | subset (string, 7 classes) |
|---|---|---|---|---|
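Each row below is one retrieval record in this schema. As a minimal sketch (not the official loader for this dataset), records of this shape could be parsed from a hypothetical local JSONL export; the filename `data.jsonl` is an assumption, while the field names follow the column headers above:

```python
# Minimal sketch: iterate over records that follow the schema above.
# Assumes a hypothetical local export "data.jsonl" with one JSON record per
# line and fields named after the columns (query_id, query,
# positive_passages, negative_passages, subset).
import json

with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        positives = record["positive_passages"]  # list of {"docid", "text", "title"}
        negatives = record["negative_passages"]  # list of {"docid", "text", "title"}
        print(record["query_id"], record["subset"], len(positives), len(negatives))
        print("query:", record["query"])
        print("first positive docid:", positives[0]["docid"])
```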
7a16ca9a56c45dac8e09a3e8696979e4
|
Mobile augmented reality for cultural heritage: A technology acceptance study
|
[
{
"docid": "70f35b19ba583de3b9942d88c94b9148",
"text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.",
"title": ""
}
] |
[
{
"docid": "8538dea1bed2a699e99e5d89a91c5297",
"text": "Friction is primary disturbance in motion control. Different types of friction cause diminution of original torque in a DC motor, such as static friction, viscous friction etc. By some means if those can be determined and compensated, the friction effect from the DC motor can be neglected. It would be a great advantage for control systems. Authors have determined the types of frictions as well as frictional coefficients and suggested a unique way of compensating the friction in a DC motor using Disturbance Observer Method which is used to determine the disturbance torques acting on a DC motor. In simulation approach, the method is modelled using MATLAB and the results have been obtained and analysed. The block diagram consists with DC motor model with DOB and RTOB. Practical approach of the implemented block diagram is shown by the obtained results. It is discussed the possibility of applying this to real life applications.",
"title": ""
},
{
"docid": "a854ee8cf82c4bd107e93ed0e70ee543",
"text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.",
"title": ""
},
{
"docid": "7489989ecaa16bc699949608f9ffc8a1",
"text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0f37f7306f879ca0b5d35516a64818fb",
"text": "Much of empirical corporate finance focuses on sources of the demand for various forms of capital, not the supply. Recently, this has changed. Supply effects of equity and credit markets can arise from a combination of three ingredients: investor tastes, limited intermediation, and corporate opportunism. Investor tastes when combined with imperfectly competitive intermediaries lead prices and interest rates to deviate from fundamental values. Opportunistic firms respond by issuing securities with high prices and investing the proceeds. A link between capital market prices and corporate finance can in principle come from either supply or demand. This framework helps to organize empirical approaches that more precisely identify and quantify supply effects through variation in one of these three ingredients. Taken as a whole, the evidence shows that shifting equity and credit market conditions play an important role in dictating corporate finance and investment. 181 A nn u. R ev . F in . E co n. 2 00 9. 1: 18 120 5. D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by H ar va rd U ni ve rs ity o n 02 /1 1/ 14 . F or p er so na l u se o nl y.",
"title": ""
},
{
"docid": "3ac1ceb1656f4ede34e417d17df41b9e",
"text": "We study the problem of link prediction in coupled networks, where we have the structure information of one (source) network and the interactions between this network and another (target) network. The goal is to predict the missing links in the target network. The problem is extremely challenging as we do not have any information of the target network. Moreover, the source and target networks are usually heterogeneous and have different types of nodes and links. How to utilize the structure information in the source network for predicting links in the target network? How to leverage the heterogeneous interactions between the two networks for the prediction task?\n We propose a unified framework, CoupledLP, to solve the problem. Given two coupled networks, we first leverage atomic propagation rules to automatically construct implicit links in the target network for addressing the challenge of target network incompleteness, and then propose a coupled factor graph model to incorporate the meta-paths extracted from the coupled part of the two networks for transferring heterogeneous knowledge. We evaluate the proposed framework on two different genres of datasets: disease-gene (DG) and mobile social networks. In the DG networks, we aim to use the disease network to predict the associations between genes. In the mobile networks, we aim to use the mobile communication network of one mobile operator to infer the network structure of its competitors. On both datasets, the proposed CoupledLP framework outperforms several alternative methods. The proposed problem of coupled link prediction and the corresponding framework demonstrate both the scientific and business applications in biology and social networks.",
"title": ""
},
{
"docid": "fb214dfd39c4fef19b6598b3b78a1730",
"text": "Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data-driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of n-grams that appear in the text. We explore the trade-off between accuracy and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is preferred to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to assigning location data to short social media texts, and offer implications for all applications that use data-driven approaches to locate content.",
"title": ""
},
{
"docid": "5ba3baabc84d02f0039748a4626ace36",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "4c0869847079b11ec8e0a6b9714b2d09",
"text": "This paper provides a tutorial overview of the latest generation of passive optical network (PON) technology standards nearing completion in ITU-T. The system is termed NG-PON2 and offers a fiber capacity of 40 Gbit/s by exploiting multiple wavelengths at dense wavelength division multiplexing channel spacing and tunable transceiver technology in the subscriber terminals (ONUs). Here, the focus is on the requirements from network operators that are driving the standards developments and the technology selection prior to standardization. A prestandard view of the main physical layer optical specifications is also given, ahead of final ITU-T approval.",
"title": ""
},
{
"docid": "d688eb5e3ef9f161ef6593a406db39ee",
"text": "Counting codes makes qualitative content analysis a controversial approach to analyzing textual data. Several decades ago, mainstream content analysis rejected qualitative content analysis on the grounds that it was not sufficiently quantitative; today, it is often charged with not being sufficiently qualitative. This article argues that qualitative content analysis is distinctively qualitative in both its approach to coding and its interpretations of counts from codes. Rather than argue over whether to do qualitative content analysis, researchers must make informed decisions about when to use it in analyzing qualitative data.",
"title": ""
},
{
"docid": "aef8b4098ade89a3218e01d15de01063",
"text": "This paper studies multidimensional matching between workers and jobs. Workers differ in manual and cognitive skills and sort into jobs that demand different combinations of these two skills. To study this multidimensional sorting, I develop a theoretical framework that generalizes the unidimensional notion of assortative matching. I derive the equilibrium in closed form and use this explicit solution to study biased technological change. The key finding is that an increase of worker-job complementarities in cognitive relative to manual inputs leads to more pronounced sorting and wage inequality across cognitive relative to manual skills. This can trigger wage polarization and boost aggregate wage dispersion. I then estimate the model for the US and identify sizeable technology shifts: During the 90s, worker-job complementarities in cognitive inputs increased by 15% whereas complementarities in manual inputs decreased by 41%. Besides this bias in complementarities, there has also been a strong cognitive skill -bias in production. Counterfactual exercises suggest that these technology shifts can account for observed changes in worker-job sorting, wage polarization and a significant part of the increase in US wage dispersion.",
"title": ""
},
{
"docid": "357cf7b1c7d88b3d7e40d72f43975bf8",
"text": "While most work in sentiment analysis in the financial domain has focused on the use of content from traditional finance news, in this work we concentrate on more subjective sources of information, blogs. We aim to automatically determine the sentiment of financial bloggers towards companies and their stocks. To do this we develop a corpus of financial blogs, annotated with polarity of sentiment with respect to a number of companies. We conduct an analysis of the annotated corpus, from which we show there is a significant level of topic shift within this collection, and also illustrate the difficulty that human annotators have when annotating certain sentiment categories. To deal with the problem of topic shift within blog articles, we propose text extraction techniques to create topic-specific sub-documents, which we use to train a sentiment classifier. We show that such approaches provide a substantial improvement over full documentclassification and that word-based approaches perform better than sentence-based or paragraph-based approaches.",
"title": ""
},
{
"docid": "06465bde1eb562e90e609a31ed2dfe70",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.",
"title": ""
},
{
"docid": "724aa202f9ba9cbc0febfa858eb28f58",
"text": "in the most of the modern cities it is difficult and expensive to create more parking spaces for vehicles since the numbers of vehicles are running on the road are increasing day by day and the count of the free spaces in the cities are the same. This problem leads to congestion for parking seekers and drivers. To develop an IoT framework that targets Parking Management which is biggest challenges in modern cities. Using embedded systems, there is a chance to develop an application which can solve these problems. The proposed smart parking solution gives an onsite deployment in which, IoT application monitors and indicate the availability of each parking space. This system helps in improvising the management of parking system by following rules of the government, for example handling different parking spaces in the city. The intuition of presenting this paper is to reduce smart city issue such as the traffic on road and reduces the pollution in the city and the parking. Keywords— Cloud; Internet of Things; MQTT; Raspberry pi; Parking",
"title": ""
},
{
"docid": "567f27921ee05e125806db1d75460e77",
"text": "Face caricatures are widely used in political cartoons and generating caricatures from images has become a popular research topic recently. The main challenge lies in achieving nice artistic effect and capturing face characteristics by exaggerating the most featured parts while keeping the resemblance to the original image. In this paper, a sketch-based face caricature synthesis framework is proposed to generate and exaggerate the face caricature from a single near-frontal picture. We first present an effective and robust face component rendering method using Adaptive Thresholding to eliminate the influence of illumination by separating face components into layers. Then, we propose an automatic exaggeration method, in which face component features are trained using Support Vector Machine (SVM) and then amplified using image processing techniques to make the caricature more hilarious and thus more impressive. After that, a hair rendering method is presented, which synthesizes hair in the same caricature style using edge-detection techniques. Practical results show that the synthesized face caricatures are of great artistic effect and well characterized, and our method is robust and efficient even under unfavorable lighting conditions.",
"title": ""
},
{
"docid": "57ae9e49c5a5e323f2461ac7c069c504",
"text": "Internet of Medical Things (IOMT) is playing vital role in healthcare industry to increase the accuracy, reliability and productivity of electronic devices. Researchers are contributing towards a digitized healthcare system by interconnecting the available medical resources and healthcare services. As IOT converge various domains but our focus is related to research contribution of IOT in healthcare domain. This paper presents the peoples contribution of IOT in healthcare domain, application and future challenges of IOT in term of medical services in healthcare. We do hope that this work will be useful for researchers and practitioners in the field, helping them to understand the huge potential of IoT in medical domain and identification of major challenges in IOMT. This work will also help the researchers to understand applications of IOT in healthcare domain. This contribution will help the researchers to understand the previous contribution of IOT in healthcare industry.",
"title": ""
},
{
"docid": "6ffcdaafcda083517bbfe4fa06f5df87",
"text": "This paper reports a qualitative study designed to investigate the issues of cybersafety and cyberbullying and report how students are coping with them. Through discussion with 74 students, aged from 10 to 17, in focus groups divided into three age levels, data were gathered in three schools in Victoria, Australia, where few such studies had been set. Social networking sites and synchronous chat sites were found to be the places where cyberbullying most commonly occurred, with email and texting on mobile phones also used for bullying. Grades 8 and 9 most often reported cyberbullying and also reported behaviours and internet contacts that were cybersafety risks. Most groups preferred to handle these issues themselves or with their friends rather then alert parents and teachers who may limit their technology access. They supported education about these issues for both adults and school students and favoured a structured mediation group of their peers to counsel and advise victims.",
"title": ""
},
{
"docid": "574aca6aa63dd17949fcce6a231cf2d3",
"text": "This paper presents an algorithm for segmenting the hair region in uncontrolled, real life conditions images. Our method is based on a simple statistical hair shape model representing the upper hair part. We detect this region by minimizing an energy which uses active shape and active contour. The upper hair region then allows us to learn the hair appearance parameters (color and texture) for the image considered. Finally, those parameters drive a pixel-wise segmentation technique that yields the desired (complete) hair region. We demonstrate the applicability of our method on several real images.",
"title": ""
},
{
"docid": "a62e6c9f37d4193eb5ec1f5f4a5af4e8",
"text": "Computer viruses have become the main threat of the safety and security of industry. Unfortunately, no mature products of anti-virus can protect computers effectively. This paper presents an approach of virus detection which is based on analysis and distilling of representative behavior characteristic and systemic description of the suspicious behaviors indicated by the sequences of APIs which called under Windows. Based on decompilation analysis, according to the determinant of Bayes Algorithm, and by the validation of abundant sample space, the technique implements the virus detection by suspicious behavior identification.",
"title": ""
},
{
"docid": "9b791932b6f2cdbbf0c1680b9a610614",
"text": "To survive in today’s global marketplace, businesses need to be able to deliver products on time, maintain market credibility and introduce new products and services faster than competitors. This is especially crucial to the Smalland Medium-sized Enterprises (SMEs). Since the emergence of the Internet, it has allowed SMEs to compete effectively and efficiently in both domestic and international market. Unfortunately, such leverage is often impeded by the resistance and mismanagement of SMEs to adopt Electronic Commerce (EC) proficiently. Consequently, this research aims to investigate how SMEs can adopt and implement EC successfully to achieve competitive advantage. Building on an examination of current technology diffusion literature, a model of EC diffusion has been developed. It investigates the factors that influence SMEs in the adoption of EC, followed by an examination in the diffusion process, which SMEs adopt to integrate EC into their business systems.",
"title": ""
},
{
"docid": "916f6f0942a08501139f6d4d1750816d",
"text": "The development of local anesthesia in dentistry has marked the beginning of a new era in terms of pain control. Lignocaine is the most commonly used local anesthetic (LA) agent even though it has a vasodilative effect and needs to be combined with adrenaline. Centbucridine is a non-ester, non amide group LA and has not been comprehensively studied in the dental setting and the objective was to compare it to Lignocaine. This was a randomized study comparing the onset time, duration, depth and cardiovascular parameters between Centbucridine (0.5%) and Lignocaine (2%). The study was conducted in the dental outpatient department at the Government Dental College in India on patients attending for the extraction of lower molars. A total of 198 patients were included and there were no significant differences between the LAs except those who received Centbucridine reported a significantly longer duration of anesthesia compared to those who received Lignocaine. None of the patients reported any side effects. Centbucridine was well tolerated and its substantial duration of anesthesia could be attributed to its chemical compound. Centbucridine can be used for dental procedures and can confidently be used in patients who cannot tolerate Lignocaine or where adrenaline is contraindicated.",
"title": ""
}
] |
scidocsrr
|
1934322dd7b5d1353346240e6e312319
|
Estimating the Spatial Distribution of Crime Events around a Football Stadium from Georeferenced Tweets
|
[
{
"docid": "c39fe902027ba5cb5f0fa98005596178",
"text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: msg8u@virginia.edu; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.",
"title": ""
}
] |
[
{
"docid": "0745e61d3c62c569821382aa3d3dae9e",
"text": "Air pollutants can be either gases or aerosols with particles or liquid droplets suspended in the air. They change the natural composition of the atmosphere, can be harmful to humans and other living species and can cause damage to natural water bodies and the land. Anthropogenic specifically due to the human causes that in this study, it has been identified that Population, Gross Domestic Product (GDP) and Manufacturing Industry adaptive from IPAT Model is the major contributors to the emission of carbon dioxide. The time series data gained of carbon emission from the years 1970 to 2011 to explain the trend. The Command and Control (CAC) and Economic Incentive (EI) approaches being recommended to assist the government monitoring the air pollution trend in Malaysia",
"title": ""
},
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "60ad412d0d6557d2a06e9914bbf3c680",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fd6d2a4376796c09fb2def7af4dc178f",
"text": "Background subtraction is a powerful mechanism for detecting change in a sequence of images that finds many applications. The most successful background subtraction methods apply probabilistic models to background intensities evolving in time; nonparametric and mixture-of-Gaussians models are but two examples. The main difficulty in designing a robust background subtraction algorithm is the selection of a detection threshold. In this paper, we adapt this threshold to varying video statistics by means of two statistical models. In addition to a nonparametric background model, we introduce a foreground model based on small spatial neighborhood to improve discrimination sensitivity. We also apply a Markov model to change labels to improve spatial coherence of the detections. The proposed methodology is applicable to other background models as well.",
"title": ""
},
{
"docid": "3f48327ca2125df3a6da0c1e54131013",
"text": "Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. Images were acquired with and without an inflatable silicon vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach.",
"title": ""
},
{
"docid": "7f3c6e8f0915160bbc9feba4d2175fb3",
"text": "Memory leaks are major problems in all kinds of applications, depleting their performance, even if they run on platforms with automatic memory management, such as Java Virtual Machine. In addition, memory leaks contribute to software aging, increasing the complexity of software maintenance. So far memory leak detection was considered to be a part of development process, rather than part of software maintenance. To detect slow memory leaks as a part of quality assurance process or in production environments statistical approach for memory leak detection was implemented and deployed in a commercial tool called Plumbr. It showed promising results in terms of leak detection precision and recall, however, even better detection quality was desired. To achieve this improvement goal, classification algorithms were applied to the statistical data, which was gathered from customer environments where Plumbr was deployed. This paper presents the challenges which had to be solved, method that was used to generate features for supervised learning and the results of the corresponding experiments.",
"title": ""
},
{
"docid": "10e92b73fcd1b89e820dc0cdfac1b70f",
"text": "With an aim of provisioning fast, reliable and low cost services to the users, the cloud-computing technology has progressed leaps and bounds. But, adjacent to its development is ever increasing ability of malicious users to compromise its security from outside as well as inside. The Network Intrusion Detection System (NIDS) techniques has gone a long way in detection of known and unknown attacks. The methods of detection of intrusion and deployment of NIDS in cloud environment are dependent on the type of services being rendered by the cloud. It is also important that the cloud administrator is able to determine the malicious intensions of the attackers and various methods of attack. In this paper, we carry out the integration of NIDS module and Honeypot Networks in Cloud environment with objective to mitigate the known and unknown attacks. We also propose method to generate and update signatures from information derived from the proposed integrated model. Using sandboxing environment, we perform dynamic malware analysis of binaries to derive conclusive evidence of malicious attacks.",
"title": ""
},
{
"docid": "7e10aa210d6985d757a21b8b6c49ae53",
"text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t",
"title": ""
},
{
"docid": "b691e07aa54a48fe7e3a2c6f6cf3754a",
"text": "We study fundamental aspects related to the efficient processing of the SPARQL query language for RDF, proposed by the W3C to encode machine-readable information in the Semantic Web. Our key contributions are (i) a complete complexity analysis for all operator fragments of the SPARQL query language, which -- as a central result -- shows that the SPARQL operator Optional alone is responsible for the PSpace-completeness of the evaluation problem, (ii) a study of equivalences over SPARQL algebra, including both rewriting rules like filter and projection pushing that are well-known from relational algebra optimization as well as SPARQL-specific rewriting schemes, and (iii) an approach to the semantic optimization of SPARQL queries, built on top of the classical chase algorithm. While studied in the context of a theoretically motivated set semantics, almost all results carry over to the official, bag-based semantics and therefore are of immediate practical relevance.",
"title": ""
},
{
"docid": "4be06c4cdaa0a8c1a9cbcf200527e014",
"text": "BACKGROUND\nKrathom is currently the most popular illicit substance in use in southern Thailand. Research regarding its effects and health impacts is scarce. This study explored the pattern of krathom use and users' perceptions of the consequences of its use.\n\n\nMETHODS\nAn in-depth qualitative interview. A group of 34 self-identified regular users, occasional users, non-users and ex-users of krathom was used in this study. Health volunteer as a key-contact person helped the researcher to invite villagers to participate in the study using snowballing technique. The process of data analysis was guided by Strauss and Corbin's grounded theory.\n\n\nRESULTS\nThe core category, 'Understanding krathom use', was generated from three inter-related categories: (i) reasons for continuing krathom use, (ii) the way of applying krathom, and (iii) perceiving positive and realizing the negative effects of krathom use and their 18 subcategories.\n\n\nCONCLUSIONS\nThe study findings reveal the importance of considering krathom use from the perspective and belief of the villagers. Krathom is addictive with its own characteristic symptoms and signs. The results provide support for policy interventions to control the availability of krathom according to the community context. In addition, krathom misuse by adolescents must be considered.",
"title": ""
},
{
"docid": "fa68493c999a154dfc8638aa27255e93",
"text": "We develop a kernel density estimation method for estimating the density of points on a network and implement the method in the GIS environment. This method could be applied to, for instance, finding 'hot spots' of traffic accidents, street crimes or leakages in gas and oil pipe lines. We first show that the application of the ordinary two-dimensional kernel method to density estimation on a network produces biased estimates. Second, we formulate a 'natural' extension of the univariate kernel method to density estimation on a network, and prove that its estimator is biased; in particular, it overestimates the densities around nodes. Third, we formulate an unbiased discontinuous kernel function on a network, and fourth, an unbiased continuous kernel function on a network. Fifth, we develop computational methods for these kernels and derive their computational complexity. We also develop a plug-in tool for operating these methods in the GIS environment. Sixth, an application of the proposed methods to the density estimation of bag-snatches on streets is illustrated. Lastly, we summarize the major results and describe some suggestions for the practical use of the proposed methods.",
"title": ""
},
{
"docid": "8b55e38d4779b6c1deaa8e00ff76812b",
"text": "We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven’s piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Secondly, an excerpt of the Moonlight Sonata from Beethoven was altered by replacing slices based on context similarity. The resulting music shows that the selected slice based on similar word2vec context also has a relatively short tonal distance from the original slice.",
"title": ""
},
{
"docid": "a049749849761dc4cd65d4442fd135f8",
"text": "Local classifiers are sometimes called lazy learners because they do not train a classifier until presented with a test sample. However, such methods are generally not completely lazy because the neighborhood size k (or other locality parameter) is usually chosen by cross validation on the training set, which can require significant preprocessing and risks overfitting. We propose a simple alternative to cross validation of the neighborhood size that requires no preprocessing: instead of committing to one neighborhood size, average the discriminants for multiple neighborhoods. We show that this forms an expected estimated posterior that minimizes the expected Bregman loss with respect to the uncertainty about the neighborhood choice. We analyze this approach for six standard and state-of-the-art local classifiers, including discriminative adaptive metric kNN (DANN), a local support vector machine (SVM-KNN), hyperplane distance nearest neighbor (HKNN), and a new local Bayesian quadratic discriminant analysis (local BDA). The empirical effectiveness of this technique versus cross validation is confirmed with experiments on seven benchmark data sets, showing that similar classification performance can be attained without any training.",
"title": ""
},
{
"docid": "7a3c965719e15d5afd6da28c12a78b01",
"text": "Prevalence of suicide attempts, self-injurious behaviors, and associated psychosocial factors were examined in a clinical sample of transgender (TG) adolescents and emerging adults (n = 96). Twenty-seven (30.3%) TG youth reported a history of at least one suicide attempt and 40 (41.8%) reported a history of self-injurious behaviors. There was a higher frequency of suicide attempts in TG youth with a desire for weight change, and more female-to-male youth reported a history of suicide attempts and self-harm behaviors than male-to-female youth. Findings indicate that this population is at a high risk for psychiatric comorbidities and life-threatening behaviors.",
"title": ""
},
{
"docid": "a2a8f1011606de266c3b235f31f95bee",
"text": "In this paper, we look at three different methods of extracting the argumentative structure from a piece of natural language text. These methods cover linguistic features, changes in the topic being discussed and a supervised machine learning approach to identify the components of argumentation schemes, patterns of human reasoning which have been detailed extensively in philosophy and psychology. For each of these approaches we achieve results comparable to those previously reported, whilst at the same time achieving a more detailed argument structure. Finally, we use the results from these individual techniques to apply them in combination, further improving the argument structure identification.",
"title": ""
},
{
"docid": "23737f898d9b50ff7741096a59054759",
"text": "We present a new method for speech denoising and robust speech recognition. Using the framework of probabilistic models allows us to integrate detailed speech models and models of realistic non-stationary noise signals in a principled manner. The framework transforms the denoising problem into a problem of Bayes-optimal signal estimation, producing minimum mean square error estimators of desired features of clean speech from noisy data. We describe a fast and efficient implementation of an algorithm that computes these estimators. The effectiveness of this algorithm is demonstrated in robust speech recognition experiments, using the Wall Street Journal speech corpus and Microsoft Whisper large-vocabulary continuous speech recognizer. Results show significantly lower word error rates than those under noisy-matched condition. In particular, when the denoising algorithm is applied to the noisy training data and subsequently the recognizer is retrained, very low error rates are obtained.",
"title": ""
},
{
"docid": "5ab4bb5923bf589436651783a6627a0d",
"text": "A capacity fade prediction model has been developed for Li-ion cells based on a semi-empirical approach. Correlations for variation of capacity fade parameters with cycling were obtained with two different approaches. The first approach takes into account only the active material loss, while the second approach includes rate capability losses too. Both methods use correlations for variation of the film resistance with cycling. The state of charge (SOC) of the limiting electrode accounts for the active material loss. The diffusion coefficient of the limiting electrode was the parameter to account for the rate capability losses during cycling. © 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "8c8120beecf9086f3567083f89e9dfa2",
"text": "This thesis studies the problem of product name recognition from short product descriptions. This is an important problem especially with the increasing use of ERP (Enterprise Resource Planning) software at the core of modern business management systems, where the information of business transactions is stored in unstructured data stores. A solution to the problem of product name recognition is especially useful for the intermediate businesses as they are interested in finding potential matches between the items in product catalogs (produced by manufacturers or another intermediate business) and items in the product requests (given by the end user or another intermediate business). In this context the problem of product name recognition is specifically challenging because product descriptions are typically short, ungrammatical, incomplete, abbreviated and multilingual. In this thesis we investigate the application of supervised machine-learning techniques and gazetteer-based techniques to our problem. To approach the problem, we define it as a classification problem where the tokens of product descriptions are classified into I, O and B classes according to the standard IOB tagging scheme. Next we investigate and compare the performance of a set of hybrid solutions that combine machine learning and gazetteer-based approaches. We study a solution space that uses four learning models: linear and non-linear SVC, Random Forest, and AdaBoost. For each solution, we use the same set of features. We divide the features into four categories: token-level features, documentlevel features, gazetteer-based features and frequency-based features. Moreover, we use automatic feature selection to reduce the dimensionality of data; that consequently improves the training efficiency and avoids over-fitting. To be able to evaluate the solutions, we develop a machine learning framework that takes as its inputs a list of predefined solutions (i.e. our solution space) and a preprocessed labeled dataset (i.e. a feature vector X, and a corresponding class label vector Y). It automatically selects the optimal number of most relevant features, optimizes the hyper-parameters of the learning models, trains the learning models, and evaluates the solution set. We believe that our automated machine learning framework can effectively be used as an AutoML framework that automates most of the decisions that have to be made in the design process of a machine learning",
"title": ""
},
{
"docid": "a2a633c972cb84d9b7d27e347bb59cfa",
"text": "This study investigated three-dimensional (3D) texture as a possible diagnostic marker of Alzheimer’s disease (AD). T1-weighted magnetic resonance (MR) images were obtained from 17 AD patients and 17 age and gender-matched healthy controls. 3D texture features were extracted from the circular 3D ROIs placed using a semi-automated technique in the hippocampus and entorhinal cortex. We found that classification accuracies based on texture analysis of the ROIs varied from 64.3% to 96.4% due to different ROI selection, feature extraction and selection options, and that most 3D texture features selected were correlated with the mini-mental state examination (MMSE) scores. The results indicated that 3D texture could detect the subtle texture differences between tissues in AD patients and normal controls, and texture features of MR images in the hippocampus and entorhinal cortex might be related to the severity of AD cognitive impairment. These results suggest that 3D texture might be a useful aid in AD diagnosis.",
"title": ""
},
{
"docid": "49a525fd20a4d53b17619e1c81696fce",
"text": "Patients irradiated for left-sided breast cancer have higher incidence of cardiovascular disease than those receiving irradiation for right-sided breast cancer. Most abnormalities were in the left anterior descending (LAD) coronary artery territory. We analyzed the relationships between preoperative examination results and irradiation dose to the LAD artery in patients with left-sided breast cancer. Seventy-one patients receiving breast radiotherapy were analyzed. The heart may rotate around longitudinal axis, showing either clockwise or counterclockwise rotation (CCWR). On electrocardiography, the transition zone (TZ) was judged in precordial leads. CCWR was considered to be present if TZ was at or to the right of V3. The prescribed dose was 50 Gy in 25 fractions. The maximum (Dmax) and mean (Dmean) doses to the LAD artery and the volumes of the LAD artery receiving at least 20 Gy, 30 Gy and 40 Gy (V20Gy, V30Gy and V40Gy, respectively) were significantly higher in CCWR than in the non-CCWR patients. On multivariate analysis, TZ was significantly associated with Dmax, Dmean, V20Gy, V30Gy, and V40Gy. CCWR is a risk factor for high-dose irradiation to the LAD artery. Electrocardiography is useful for evaluating the cardiovascular risk of high-dose irradiation to the LAD artery.",
"title": ""
}
] |
scidocsrr
|
5668ae6929813bd46500197605a6f1f2
|
Trust and Reputation Systems
|
[
{
"docid": "f3a044835e9cbd0c13218ab0f9c06dd1",
"text": "Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface.",
"title": ""
},
{
"docid": "0a97c254e5218637235a7e23597f572b",
"text": "We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputation are computed using either of the two schemes, debit-credit reputation computation (DCRC) and credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public key based mechanism that periodically updates the peer reputations in a secure, light-weight, and partially distributed manner. We evaluate using simulations the performance tradeoffs inherent in the design of our system.",
"title": ""
}
] |
[
{
"docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4",
"text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.",
"title": ""
},
{
"docid": "305cfc6824ec7ac30a08ade2fff66c13",
"text": "Psychological research has shown that 'peak-end' effects influence people's retrospective evaluation of hedonic and affective experience. Rather than objectively reviewing the total amount of pleasure or pain during an experience, people's evaluation is shaped by the most intense moment (the peak) and the final moment (end). We describe an experiment demonstrating that peak-end effects can influence a user's preference for interaction sequences that are objectively identical in their overall requirements. Participants were asked to choose which of two interactive sequences of five pages they preferred. Both sequences required setting a total of 25 sliders to target values, and differed only in the distribution of the sliders across the five pages -- with one sequence intended to induce positive peak-end effects, the other negative. The study found that manipulating only the peak or the end of the series did not significantly change preference, but that a combined manipulation of both peak and end did lead to significant differences in preference, even though all series had the same overall effort.",
"title": ""
},
{
"docid": "1cd77d97f27b45d903ffcecda02795a5",
"text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.",
"title": ""
},
{
"docid": "0dd78cb46f6d2ddc475fd887a0dc687c",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
{
"docid": "ee0ba4a70bfa4f53d33a31b2d9063e89",
"text": "Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multisecond scales, we find a distinctive piecewise-linear nonstationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciliates the seemingly contradicting observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation",
"title": ""
},
{
"docid": "3d0c8e3539dd8f5120a404836020133d",
"text": "Regenerative braking system is the own system of electric and hybrid electric vehicle. The system can restore the kinetic energy and potential energy, used during start and accelerating, into battery through electrical machine. The total brake force is composed of friction brake force on front axel, friction brake force on rear axel and regenerative brake force when a vehicle equipped with regenerative braking system brakes. A control strategy, parallel regenerative brake strategy, was proposed to resolve the distribution of the three forces. The parallel regenerative brake strategy was optimized on Saturn SL1 and then simulated under urban 15 drive cycle. As a result, through optimization the parallel brake strategy is not only safe enough but also can restore the largest amount of the brake energy.",
"title": ""
},
{
"docid": "5b617701a4f2fa324ca7e3e7922ce1c4",
"text": "Open circuit voltage of a silicon solar cell is around 0.6V. A solar module is constructed by connecting a number of cells in series to get a practically usable voltage. Partial shading of a Solar Photovoltaic Module (SPM) is one of the main causes of overheating of shaded cells and reduced energy yield of the module. The present work is a study of harmful effects of partial shading on the performance of a PV module. A PSPICE simulation model that represents 36 cells PV module under partial shaded conditions has been used to test several shading profiles and results are presented.",
"title": ""
},
{
"docid": "7e884438ee8459a441cbe1500f1bac88",
"text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.",
"title": ""
},
{
"docid": "cab56ff19b08af38eb1812a4f3e32d04",
"text": "To ensure security, it is important to build-in security in both the planning and the design phases and adapt a security architecture which makes sure that regular and security related tasks, are deployed correctly. Security requirements must be linked to the business goals. We identified four domains that affect security at an organization namely, organization governance, organizational culture, the architecture of the systems, and service management. In order to identify and explore the strength and weaknesses of particular organization’s security, a wide range model has been developed. This model is proposed as an information security maturity model (ISMM) and it is intended as a tool to evaluate the ability of organizations to meet the objectives of security.",
"title": ""
},
{
"docid": "9a136517edbfce2a7c6b302da9e6c5b7",
"text": "This paper presents our approach to semantic relatedness and textual entailment subtasks organized as task 1 in SemEval 2014. Specifically, we address two questions: (1) Can we solve these two subtasks together? (2) Are features proposed for textual entailment task still effective for semantic relatedness task? To address them, we extracted seven types of features including text difference measures proposed in entailment judgement subtask, as well as common text similarity measures used in both subtasks. Then we exploited the same feature set to solve the both subtasks by considering them as a regression and a classification task respectively and performed a study of influence of different features. We achieved the first and the second rank for relatedness and entailment task respectively.",
"title": ""
},
{
"docid": "7a37df81ad70697549e6da33384b4f19",
"text": "Water scarcity is now one of the major global crises, which has affected many aspects of human health, industrial development and ecosystem stability. To overcome this issue, water desalination has been employed. It is a process to remove salt and other minerals from saline water, and it covers a variety of approaches from traditional distillation to the well-established reverse osmosis. Although current water desalination methods can effectively provide fresh water, they are becoming increasingly controversial due to their adverse environmental impacts including high energy intensity and highly concentrated brine waste. For millions of years, microorganisms, the masters of adaptation, have survived on Earth without the excessive use of energy and resources or compromising their ambient environment. This has encouraged scientists to study the possibility of using biological processes for seawater desalination and the field has been exponentially growing ever since. Here, the term biodesalination is offered to cover all of the techniques which have their roots in biology for producing fresh water from saline solution. In addition to reviewing and categorizing biodesalination processes for the first time, this review also reveals unexplored research areas in biodesalination having potential to be used in water treatment.",
"title": ""
},
{
"docid": "26b1c00522009440c0481453e0f6331c",
"text": "Software organizations that develop their software products using the agile software processes such as Extreme Programming (XP) face a number of challenges in their effort to demonstrate that their process activities conform to ISO 9001 requirements, a major one being product traceability: software organizations must provide evidence of ISO 9001 conformity, and they need to develop their own procedures, tools, and methodologies to do so. This paper proposes an auditing model for ISO 9001 traceability requirements that is applicable in agile (XP) environments. The design of our model is based on evaluation theory, and includes the use of several auditing “yardsticks” derived from the principles of engineering design, the SWEBOK Guide, and the CMMI-DEV guidelines for requirement management and traceability for each yardstick. Finally, five approaches for agile-XP traceability approaches are audited based on the proposed audit model.",
"title": ""
},
{
"docid": "925efe54f311b78ecd83419c1ad0f783",
"text": "Bayesian neural networks (BNNs) allow us to reason about uncertainty in a principled way. Stochastic Gradient Langevin Dynamics (SGLD) enables efficient BNN learning by drawing samples from the BNN posterior using mini-batches. However, SGLD and its extensions require storage of many copies of the model parameters, a potentially prohibitive cost, especially for large neural networks. We propose a framework, Adversarial Posterior Distillation, to distill the SGLD samples using a Generative Adversarial Network (GAN). At test-time, samples are generated by the GAN. We show that this distillation framework incurs no loss in performance on recent BNN applications including anomaly detection, active learning, and defense against adversarial attacks. By construction, our framework distills not only the Bayesian predictive distribution, but the posterior itself. This allows one to compute quantities such as the approximate model variance, which is useful in downstream tasks. To our knowledge, these are the first results applying MCMC-based BNNs to the aforementioned applications.",
"title": ""
},
{
"docid": "bd7f3decfe769db61f0577a60e39a26f",
"text": "Automated food and drink recognition methods connect to cloud-based lookup databases (e.g., food item barcodes, previously identified food images, or previously classified NIR (Near Infrared) spectra of food and drink items databases) to match and identify a scanned food or drink item, and report the results back to the user. However, these methods remain of limited value if we cannot further reason with the identified food and drink items, ingredients and quantities/portion sizes in a proposed meal in various contexts; i.e., understand from a semantic perspective their types, properties, and interrelationships in the context of a given user’s health condition and preferences. In this paper, we review a number of “food ontologies”, such as the Food Products Ontology/FOODpedia (by Kolchin and Zamula), Open Food Facts (by Gigandet et al.), FoodWiki (Ontology-driven Mobile Safe Food Consumption System by Celik), FOODS-Diabetes Edition (A Food-Oriented Ontology-Driven System by Snae Namahoot and Bruckner), and AGROVOC multilingual agricultural thesaurus (by the UN Food and Agriculture Organization—FAO). These food ontologies, with appropriate modifications (or as a basis, to be added to and further OPEN ACCESS Future Internet 2015, 7 373 expanded) and together with other relevant non-food ontologies (e.g., about diet-sensitive disease conditions), can supplement the aforementioned lookup databases to enable progression from the mere automated identification of food and drinks in our meals to a more useful application whereby we can automatically reason with the identified food and drink items and their details (quantities and ingredients/bromatological composition) in order to better assist users in making the correct, healthy food and drink choices for their particular health condition, age, body weight/BMI (Body Mass Index), lifestyle and preferences, etc.",
"title": ""
},
{
"docid": "f6553bf60969c422a07e1260a35b10c9",
"text": "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.",
"title": ""
},
{
"docid": "4d24a09dcbac1cc33a88bbabc89102d8",
"text": "Streaming data analysis in real time is becoming the fastest and most efficient way to obtain useful knowledge from what is happening now, allowing organizations to react quickly when problems appear or to detect new trends helping to improve their performance. Evolving data streams are contributing to the growth of data created over the last few years. We are creating the same quantity of data every two days, as we created from the dawn of time up until 2003. Evolving data streams methods are becoming a low-cost, green methodology for real time online prediction and analysis. We discuss the current and future trends of mining evolving data streams, and the challenges that the field will have to overcome during the next years.",
"title": ""
},
{
"docid": "011fd6ee57ffd223c0e1a29b3a7ecad1",
"text": "A substrate-integrated waveguide (SIW) H-plane sectoral horn antenna, with significantly improved bandwidth, is presented. A tapered ridge, consists of a simple arrangement of vias on the side flared wall within the multilayer substrate, is introduced to enlarge the operational bandwidth. A simple feed configuration is suggested to provide the propagating wave for the antenna structure. The proposed antenna is simulated by two well-known full wave packages, the Ansoft HFSS and the CST microwave studio. Close agreement between both simulation results is obtained. The designed antenna shows good radiation characteristics and low VSWR, lower than 2.5, for the whole frequency range of 18– 40 GHz.",
"title": ""
},
{
"docid": "c91fe61e7ef90867377940644b566d93",
"text": "The adoption of Learning Management Systems to create virtual learning communities is a unstructured form of allowing collaboration that is rapidly growing. Compared to other systems that structure interactions, these environments provide data of the interaction performed at a very low level. For assessment purposes, this fact poses some difficulties to derive higher lever indicators of collaboration. In this paper we propose to shape the analysis problem as a data mining task. We suggest that the typical data mining cycle bears many resemblances with proposed models for collaboration management. We present some preliminary experiments using clustering to discover patterns reflecting user behaviors. Results are very encouraging and suggest several research directions.",
"title": ""
},
{
"docid": "41c5a41b0bebcdb5b744d0ac9d0ed0f6",
"text": "For research to progress most effectively, we first should establish common ground regarding just what is the problem that imbalanced data sets present to machine learning systems. Why and when should imbalanced data sets be problematic? When is the problem simply an artifact of easily rectified design choices? I will try to pick the low-hanging fruit and share them with the rest of the workshop participants. Specifically, I would like to discuss what the problem is not. I hope this will lead to a profitable discussion of what the problem indeed is, and how it might be addressed most effectively. A common notion in machine learning causes the most basic problem, and indeed often has stymied both research-oriented and practical attempts to learn from imbalanced data sets. Fortunately the problem is straightforward to fix. The stumbling block is the notion that an inductive learner produces a black box that acts as a categorical (e.g., binary) labeling function. Of course, many of our learning algorithms in fact do produce such classifiers, which gets us into trouble when faced with imbalanced class distributions. The assumptions built into (most of) these algorithms are: 1. that maximizing accuracy is the goal, and 2. that, in use, the classifier will operate on data drawn from the same distribution as the training data. The result of these two assumptions is that machine learning on unbalanced data sets produces unsatisfactory classifiers. The reason why should be clear: if 99% of the data are from one class, for most realistic problems a learning algorithm will be hard pressed to do better than the 99% accuracy achievable by the trivial classifier that labels everything with the majority class. Based on the underlying assumptions, this is the intelligent thing to do. It is more striking when one of our algorithms, operating under these assumptions, behaves otherwise. This apparent problem nothwithstanding, it would be premature to conclude that there is a fundamental difficulty with learning from imbalanced data sets. We first must probe deeper and ask whether the algorithms are robust to the weakening of the assumptions that cause the problem. When designing algorithms, some assumptions are fundamental. Changing them would entail redesigning the algorithm completely. Other assumptions are made for convenience, and can be changed with little consequence. So, which is the nature of the assumptions (1 & 2) in question? Investigating this is (tacitly perhaps) one of the main …",
"title": ""
},
{
"docid": "d3d5f135cc2a09bf0dfc1ef88c6089b5",
"text": "In this paper, we present the Expert Hub System, which was designed to help governmental structures find the best experts in different areas of expertise for better reviewing of the incoming grant proposals. In order to define the areas of expertise with topic modeling and clustering, and then to relate experts to corresponding areas of expertise and rank them according to their proficiency in certain areas of expertise, the Expert Hub approach uses the data from the Directorate of Science and Technology Programmes. Furthermore, the paper discusses the use of Big Data and Machine Learning in the Russian government",
"title": ""
}
] |
scidocsrr
|
b9e91f2208a37b2bd607a4b86557a532
|
Convolutional gated recurrent neural network incorporating spatial features for audio tagging
|
[
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
}
] |
[
{
"docid": "f7271973731a60fdf7452030470612fa",
"text": "Recommender systems play a crucial role in mitigating the information overload problem in social media by suggesting relevant information to users. The popularity of pervasively available social activities for social media users has encouraged a large body of literature on exploiting social networks for recommendation. The vast majority of these systems focus on unsigned social networks (or social networks with only positive links), while little work exists for signed social networks (or social networks with positive and negative links). The availability of negative links in signed social networks presents both challenges and opportunities in the recommendation process. We provide a principled and mathematical approach to exploit signed social networks for recommendation, and propose a model, RecSSN, to leverage positive and negative links in signed social networks. Empirical results on real-world datasets demonstrate the effectiveness of the proposed framework. We also perform further experiments to explicitly understand the effect of signed networks in RecSSN.",
"title": ""
},
{
"docid": "f49090ba1157dcdc5666a58043452ea4",
"text": "A large number of algorithms have been developed to perform non-rigid registration and it is a tool commonly used in medical image analysis. The free-form deformation algorithm is a well-established technique, but is extremely time consuming. In this paper we present a parallel-friendly formulation of the algorithm suitable for graphics processing unit execution. Using our approach we perform registration of T1-weighted MR images in less than 1 min and show the same level of accuracy as a classical serial implementation when performing segmentation propagation. This technology could be of significant utility in time-critical applications such as image-guided interventions, or in the processing of large data sets.",
"title": ""
},
{
"docid": "85c124fd317dc7c2e5999259d26aa1db",
"text": "This paper presents a method for extracting rotation-invariant features from images of handwriting samples that can be used to perform writer identification. The proposed features are based on the Hinge feature [1], but incorporating the derivative between several points along the ink contours. Finally, we concatenate the proposed features into one feature vector to characterize the writing styles of the given handwritten text. The proposed method has been evaluated using Fire maker and IAM datasets in writer identification, showing promising performance gains.",
"title": ""
},
{
"docid": "c9a8587ea80bc4c444dcfe98844c5049",
"text": "Dealing with multiple labels is a supervised learning problem of increasing importance. However, in some tasks, certain learning algorithms produce a confidence score vector for each label that needs to be classified as relevant or irrelevant. More importantly, multi-label models are learnt in training conditions called operating conditions, which most likely change in other contexts. In this work, we explore the existing thresholding methods of multi-label classification by considering that label costs are operating conditions. This paper provides an empirical comparative study of these approaches by calculating the empirical loss over range of operating conditions. It also contributes two new methods in multilabel classification that have been used in binary classification: score-driven and one optimal.",
"title": ""
},
{
"docid": "a3e36252f25a9fe6f46c729fb8a2f157",
"text": "Although significant advances have been made in the area of human poses estimation from images using deep Convolutional Neural Network (ConvNet), it remains a big challenge to perform 3D pose inference in-the-wild. This is due to the difficulty to obtain 3D pose groundtruth for outdoor environments. In this paper, we propose a novel framework to tackle this problem by exploiting the information of each bone indicating if it is forward or backward with respect to the view of the camera(we term it Forwardor-Backward Information abbreviated as FBI). Our method firstly trains a ConvNet with two branches which maps an image of a human to both the 2D joint locations and the FBI of bones. These information is further fed into a deep regression network to predict the 3D positions of joints. To support the training, we also develop an annotation user interface and labeled such FBI for around 12K in-the-wild images which are randomly selected from MPII (a public dataset of 2D pose annotation). Our experimental results on the standard benchmarks demonstrate that our approach outperforms state-of-the-art methods both qualitatively and quantitatively.",
"title": ""
},
{
"docid": "aad3945a69f57049c052bcb222f1b772",
"text": "The chapter 1 on Social Media and Social Computing has documented the nature and characteristics of social networks and community detection. The explanation about the emerging of social networks and their properties constitute this chapter followed by a discussion on social community. The nodes, ties and influence in the social networks are the core of the discussion in the second chapter. Centrality is the core discussion here and the degree of centrality and its measure is explained. Understanding network topology is required for social networks concepts.",
"title": ""
},
{
"docid": "cee4018679662d7e2aeaefa624e52a77",
"text": "While video games have traditionally been considered simple entertainment devices, nowadays they occupy a privileged position in the leisure and entertainment market, representing the fastest-growing industry globally. We regard the video game as a special type of interactive system whose principal aim is to provide the player with fun and entertainment. In this paper we will analyse how, in Video Games context, Usability alone is not sufficient to achieve the optimum Player Experience. It needs broadening and deepening, to embrace further attributes and properties that identify and describe the Player Experience. We present our proposed means of defining Playability. We also introduce the notion of Facets of Playability. Each facet will allow us to characterize the Playability easily, and associate them with the different elements of a video game. To guarantee the optimal Player Experience, Playability needs to be assessed throughout the entire video game development process, taking a Player-Centred Video Game Design approach.",
"title": ""
},
{
"docid": "3d9bed630cbf56169df6e943740a9b2a",
"text": "We have previously shown that disease differing widely in severity and prognosis is included in the entity, dengue hemorrhagic fever (DHF) in Thailand.\"2 Nearly 40 percent of 523 children hospitalized with DHF had a syndrome characterized by shock following a fever of several days duration. The rest had less severe febrile illnesses with various mild hemorrhagic manifestations. When sera from these children were studied, a significant correlation between shocked patients and a secondary-type antibody response to dengue virus was found.8 This report considers in greater detail the various manifestations of dengue disease and the antibody response in the host. Observations are included that suggest that severity of host response to dengue infection is influenced by an interaction between immune status and the age and sex of the patient. Associations between severity of illness and the rate of virus recovery, the quantity of antibody produced and the type of dengue virus recovered are described. A synthesis of these data and their relevance to the pathogenesis of human dengue infection are presented in the final paper in this series. MATERIALS AND METHODS Patient selection, serologic methods, virus isolation and identification techniques and definition of primary and secondary dengue antibody responses have been described .',',' This report is based upon experience with 604 dengue hemorrhagic fever patients: 400 admitted to Children's 81 fatal hemorrhagic fever cases admitted to other Bangkok hospitals in 1962-1964. All DHF patients who survived their infection had serologic evidence of recent dengue infection according to established criteria' and a discharge diagnosis of hemorrhagic fever. Included for some evaluations are 35 out",
"title": ""
},
{
"docid": "f62943740d566123632a6814563b3b7e",
"text": "In this paper, a new framework based on matrix theory is proposed to analyze and design cooperative controls for a group of individual dynamical systems whose outputs are sensed by or communicated to others in an intermittent, dynamically changing, and local manner. In the framework, sensing/communication is described mathematically by a time-varying matrix whose dimension is equal to the number of dynamical systems in the group and whose elements assume piecewise-constant and binary values. Dynamical systems are generally heterogeneous and can be transformed into a canonical form of different, arbitrary, but finite relative degrees. Utilizing a set of new results on augmentation of irreducible matrices and on lower triangulation of reducible matrices, the framework allows a designer to study how a general local-and-output-feedback cooperative control can determine group behaviors of the dynamical systems and to see how changes of sensing/communication would impact the group behaviors over time. A necessary and sufficient condition on convergence of a multiplicative sequence of reducible row-stochastic (diagonally positive) matrices is explicitly derived, and through simple choices of a gain matrix in the cooperative control law, the overall closed-loop system is shown to exhibit cooperative behaviors (such as single group behavior, multiple group behaviors, adaptive cooperative behavior for the group, and cooperative formation including individual behaviors). Examples, including formation control of nonholonomic systems in the chained form, are used to illustrate the proposed framework.",
"title": ""
},
{
"docid": "440a6b8b41a98e392ec13a5e13d7e7ba",
"text": "A classical heuristic in software testing is to reward diversity, which implies that a higher priority must be assigned to test cases that differ the most from those already prioritized. This approach is commonly known as similarity-based test prioritization (SBTP) and can be realized using a variety of techniques. The objective of our study is to investigate whether SBTP is more effective at finding defects than random permutation, as well as determine which SBTP implementations lead to better results. To achieve our objective, we implemented five different techniques from the literature and conducted an experiment using the defects4j dataset, which contains 395 real faults from six real-world open-source Java programs. Findings indicate that running the most dissimilar test cases early in the process is largely more effective than random permutation (Vargha–Delaney A [VDA]: 0.76–0.99 observed using normalized compression distance). No technique was found to be superior with respect to the effectiveness. Locality-sensitive hashing was, to a small extent, less effective than other SBTP techniques (VDA: 0.38 observed in comparison to normalized compression distance), but its speed largely outperformed the other techniques (i.e., it was approximately 5–111 times faster). Our results bring to mind the well-known adage, “don’t put all your eggs in one basket”. To effectively consume a limited testing budget, one should spread it evenly across different parts of the system by running the most dissimilar test cases early in the testing process.",
"title": ""
},
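The following is a minimal, hypothetical sketch of similarity-based test prioritization with normalized compression distance, in the spirit of the study summarized above; it is not the authors' implementation. The zlib-based NCD, the farthest-first greedy loop, and the placeholder test descriptions are all assumptions for illustration.

```python
# A minimal sketch of similarity-based test prioritization using normalized
# compression distance (NCD) and a greedy farthest-first ordering. The test
# "contents" here are placeholder strings; real inputs might be test source
# code or execution traces (an assumption, not taken from the study).
import zlib

def ncd(a: bytes, b: bytes) -> float:
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def prioritize(tests: dict) -> list:
    """Order tests so that each next test is maximally dissimilar
    (by NCD) from those already selected."""
    remaining = {name: text.encode() for name, text in tests.items()}
    first = max(remaining, key=lambda n: len(remaining[n]))  # arbitrary seed
    order = [first]
    selected = [remaining.pop(first)]
    while remaining:
        # farthest-first: maximize the minimum distance to the selected set
        name = max(remaining,
                   key=lambda n: min(ncd(remaining[n], s) for s in selected))
        order.append(name)
        selected.append(remaining.pop(name))
    return order

tests = {
    "test_login_ok": "login with valid credentials expects dashboard",
    "test_login_bad_password": "login with wrong password expects error",
    "test_export_csv": "export report as csv and compare checksum",
}
print(prioritize(tests))
```

The seed choice and the use of raw text as the similarity input are design decisions made only for this sketch; other SBTP variants differ on both points.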
{
"docid": "4f263c1b43c35f32f2a8d3cfbb380bc1",
"text": "In this article, we explore creativity alongside educational technology, as fundamental constructs of 21st century education. Creativity has becoming increasingly important, as one of the most important and noted skills for success in the 21st century. We offer a definition of creativity; and draw upon a systems model of creativity, to suggest creativity emerges and exists within a system, rather than only at the level of individual processes. We suggest that effective infusion of creativity and technology in education must be considered in a three-fold systemic manner: at the levels of teacher education, assessment and educational policy. We provide research and practical implications with broad recommendations across these three areas, to build discourse around infusion of creative thinking and technology in 21st century educational systems.",
"title": ""
},
{
"docid": "eddeeb5b00dc7f82291b3880956e2f01",
"text": "This study aims at building a robust method for semiautomated information extraction of pavement markings detected from mobile laser scanning (MLS) point clouds. The proposed workflow consists of three components: 1) preprocessing, 2) extraction, and 3) classification. In preprocessing, the three-dimensional (3-D) MLS point clouds are converted into radiometrically corrected and enhanced two-dimensional (2-D) intensity imagery of the road surface. Then, the pavement markings are automatically extracted with the intensity using a set of algorithms, including Otsu's thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with the geometric parameters by using a manually defined decision tree. A study was conducted by using the MLS dataset acquired in Xiamen, Fujian, China. The results demonstrated that the proposed workflow and method can achieve 92% in completeness, 95% in correctness, and 94% in F-score.",
"title": ""
},
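To make the extraction step described above more concrete, here is a small NumPy-only sketch of Otsu's thresholding applied to a synthetic road-surface intensity image. It is an illustrative stand-in, not the authors' code; the synthetic image, the bin count, and the omission of the neighbor-counting filtering and region-growing stages are assumptions.

```python
# Rough illustration of one step in the kind of pipeline summarized above:
# Otsu's global thresholding on a 2-D intensity image, implemented with
# NumPy only. The synthetic image and 256-bin histogram are placeholders.
import numpy as np

def otsu_threshold(image: np.ndarray, bins: int = 256) -> float:
    hist, edges = np.histogram(image.ravel(), bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(prob)                      # class-0 (background) weight
    w1 = 1.0 - w0                             # class-1 (foreground) weight
    cum_mean = np.cumsum(prob * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (mu_total - cum_mean) / np.maximum(w1, 1e-12)

    between_var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
    return centers[np.argmax(between_var)]

# Synthetic "road surface" intensity image: bright markings on dark asphalt.
rng = np.random.default_rng(0)
image = rng.normal(40, 10, size=(200, 200))
image[90:110, :] += 150                        # a bright lane marking stripe

t = otsu_threshold(image)
markings = image > t
print(f"threshold={t:.1f}, marking pixels={int(markings.sum())}")
```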
{
"docid": "5ed8f3b58ae1320411f15a4d7c0f5634",
"text": "With the advent of the ubiquitous era, context-based music recommendation has become one of rapidly emerging applications. Context-based music recommendation requires multidisciplinary efforts including low level feature extraction, music mood classification and human emotion prediction. Especially, in this paper, we focus on the implementation issues of context-based mood classification and music recommendation. For mood classification, we reformulate it into a regression problem based on support vector regression (SVR). Through the use of the SVR-based mood classifier, we achieved 87.8% accuracy. For music recommendation, we reason about the user's mood and situation using both collaborative filtering and ontology technology. We implement a prototype music recommendation system based on this scheme and report some of the results that we obtained.",
"title": ""
},
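As a hedged illustration of casting mood classification as regression with SVR, the sketch below fits an RBF-kernel SVR (wrapped for two outputs) to toy acoustic features and valence/arousal targets. The features, targets, and hyperparameters are invented for illustration; the abstract above does not specify them.

```python
# Sketch: mood prediction as regression with support vector regression.
# Feature values, valence/arousal targets, and hyperparameters are assumed.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy acoustic features (e.g., tempo, energy, spectral centroid) per track.
X = np.array([
    [120, 0.80, 2200.0],
    [ 68, 0.30, 1100.0],
    [140, 0.90, 2600.0],
    [ 75, 0.25,  900.0],
])
# Mood targets on a continuous valence/arousal plane in [-1, 1].
y = np.array([
    [ 0.7,  0.8],
    [-0.4, -0.5],
    [ 0.8,  0.9],
    [-0.6, -0.6],
])

model = make_pipeline(
    StandardScaler(),
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.05)),
)
model.fit(X, y)

new_track = np.array([[100, 0.55, 1700.0]])
print(model.predict(new_track))   # predicted (valence, arousal)
```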
{
"docid": "d2574ee0353b1889c2187a889b1d7a41",
"text": "Over recent years, the world has seen multiple uses for conversational agents. Chatbots has been implemented into ecommerce systems, such as Amazon Echo's Alexa [1]. Businesses and organizations like Facebook are also implementing bots into their applications. While a number of amazing chatbot platform exists, there are still difficulties in creating data-driven-systems as they large amount of data is needed for development and training. This paper we describe an advanced platform for evaluating and annotating human-chatbot interactions, its main features and goals, as well as the future plans we have for it.",
"title": ""
},
{
"docid": "53445e289a7472c52e0bccae9f255d8d",
"text": "This paper analyses a ZVS isolated active clamp Sepic converter for high power LED applications. Due to the recent advancement in the light emitting diode technology, high brightness, high efficient LEDs becomes achievable in residential, industry and commercial applications to replace the incandescent bulbs, halogen bulbs, and even compact fluorescent light bulbs. Generally in these devices, the lumen is proportional to the current, so the converter has to control the LED string current, and for high power applications (greater than 100W), is preferable to have a galvanic isolation between the bus and the output; among different isolated topologies and taking into account the large input voltage variation in the application this paper is targeting, a ZVS active clamp Sepic converter has been adopted. Due to its circuit configuration, it can step up or down the input voltage, allowing a universal use, with lamps with different voltages and powers. A 300W, 5A, 48V input voltage prototype has been developed, and a peak efficiency of 91% has been reached without synchronous rectification.",
"title": ""
},
{
"docid": "36cc985d2d86c4047533550293e8c7f4",
"text": "The pyISC is a Python API and extension to the C++ based Incremental Stream Clustering (ISC) anomaly detection and classification framework. The framework is based on parametric Bayesian statistical inference using the Bayesian Principal Anomaly (BPA), which enables to combine the output from several probability distributions. pyISC is designed to be easy to use and integrated with other Python libraries, specifically those used for data science. In this paper, we show how to use the framework and we also compare its performance to other well-known methods on 22 real-world datasets. The simulation results show that the performance of pyISC is comparable to the other methods. pyISC is part of the Stream toolbox developed within the STREAM project.",
"title": ""
},
{
"docid": "febf797870da28d6492885095b92ef1f",
"text": "Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based ldquoactive learningrdquo approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques.",
"title": ""
},
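The snippet below sketches a simple uncertainty-sampling step (pick the unlabeled example whose predicted class distribution has maximum entropy) as a stand-in for the entropy-based criterion described above; the paper's actual criterion, expected information gain over the whole unlabeled set, is more elaborate. The probability table is made up for illustration.

```python
# Minimal sketch of an entropy-driven selection step for active learning:
# ask the oracle to label the unlabeled example whose predicted class
# distribution has the highest entropy. The probability table is made up;
# in practice it would come from the current model's predictions.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy (in nats) of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Rows: unlabeled images; columns: class probabilities from the model.
unlabeled_probs = np.array([
    [0.96, 0.02, 0.02],   # confident -> low value in labeling
    [0.40, 0.35, 0.25],   # uncertain -> informative to label
    [0.70, 0.20, 0.10],
])

scores = predictive_entropy(unlabeled_probs)
query_index = int(np.argmax(scores))
print(f"entropies={np.round(scores, 3)}, query example #{query_index}")
```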
{
"docid": "47a8262ee31f7657046e794dd28d6738",
"text": "Transferring deformation from a source shape to a target shape is a very useful technique in computer graphics. State-of-the-art deformation transfer methods require either point-wise correspondences between source and target shapes, or pairs of deformed source and target shapes with corresponding deformations. However, in most cases, such correspondences are not available and cannot be reliably established using an automatic algorithm. Therefore, substantial user effort is needed to label the correspondences or to obtain and specify such shape sets. In this work, we propose a novel approach to automatic deformation transfer between two unpaired shape sets without correspondences. 3D deformation is represented in a high-dimensional space. To obtain a more compact and effective representation, two convolutional variational autoencoders are learned to encode source and target shapes to their latent spaces. We exploit a Generative Adversarial Network (GAN) to map deformed source shapes to deformed target shapes, both in the latent spaces, which ensures the obtained shapes from the mapping are indistinguishable from the target shapes. This is still an under-constrained problem, so we further utilize a reverse mapping from target shapes to source shapes and incorporate cycle consistency loss, i.e. applying both mappings should reverse to the input shape. This VAE-Cycle GAN (VC-GAN) architecture is used to build a reliable mapping between shape spaces. Finally, a similarity constraint is employed to ensure the mapping is consistent with visual similarity, achieved by learning a similarity neural network that takes the embedding vectors from the source and target latent spaces and predicts the light field distance between the corresponding shapes. Experimental results show that our fully automatic method is able to obtain high-quality deformation transfer results with unpaired data sets, comparable or better than existing methods where strict correspondences are required.",
"title": ""
},
{
"docid": "99dc118b4e0754bd8a57bdde63243242",
"text": "We present a fully implicit Eulerian technique for simulating free surface viscous liquids which eliminates artifacts in previous approaches, efficiently supports variable viscosity, and allows the simulation of more compelling viscous behaviour than previously achieved in graphics. Our method exploits a variational principle which automatically enforces the complex boundary condition on the shear stress at the free surface, while giving rise to a simple discretization with a symmetric positive definite linear system. We demonstrate examples of our technique capturing realistic buckling, folding and coiling behavior. In addition, we explain how to handle domains whose boundary comprises both ghost fluid Dirichlet and variational Neumann parts, allowing correct behaviour at free surfaces and solid walls for both our viscous solve and the variational pressure projection of Batty et al. [BBB07].",
"title": ""
},
{
"docid": "d8ec0c507217500a97c1664c33b2fe72",
"text": "To realize ideal force control of robots that interact with a human, a very precise actuating system with zero impedance is desired. For such applications, a rotary series elastic actuator (RSEA) has been introduced recently. This paper presents the design of RSEA and the associated control algorithms. To generate joint torque as desired, a torsional spring is installed between a motor and a human joint, and the motor is controlled to produce a proper spring deflection for torque generation. When the desired torque is zero, the motor must follow the human joint motion, which requires that the friction and the inertia of the motor be compensated. The human joint and the body part impose the load on the RSEA. They interact with uncertain environments and their physical properties vary with time. In this paper, the disturbance observer (DOB) method is applied to make the RSEA precisely generate the desired torque under such time-varying conditions. Based on the nominal model preserved by the DOB, feedback and feedforward controllers are optimally designed for the desired performance, i.e., the RSEA: (1) exhibits very low impedance and (2) generates the desired torque precisely while interacting with a human. The effectiveness of the proposed design is verified by experiments.",
"title": ""
}
] |
scidocsrr
|
aaf707f4c0c576750216dd53386fc22c
|
World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions
|
[
{
"docid": "49387b129347f7255bf77ad9cc726275",
"text": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.",
"title": ""
},
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
}
] |
[
{
"docid": "dd634fe7f5bfb5d08d0230c3e64220a4",
"text": "Living in an oxygenated environment has required the evolution of effective cellular strategies to detect and detoxify metabolites of molecular oxygen known as reactive oxygen species. Here we review evidence that the appropriate and inappropriate production of oxidants, together with the ability of organisms to respond to oxidative stress, is intricately connected to ageing and life span.",
"title": ""
},
{
"docid": "a08b91474b2eefcc66cc74665a632653",
"text": "The present research deals with audio events detection in noisy environments for a multimedia surveillance application. In surveillance or homeland security most of the systems aiming to automatically detect abnormal situations are only based on visual clues while, in some situations, it may be easier to detect a given event using the audio information. This is in particular the case for the class of sounds considered in this paper, sounds produced by gun shots. The automatic shot detection system presented is based on a novelty detection approach which offers a solution to detect abnormality (abnormal audio events) in continuous audio recordings of public places. We specifically focus on the robustness of the detection against variable and adverse conditions and the reduction of the false rejection rate which is particularly important in surveillance applications. In particular, we take advantage of potential similarity between the acoustic signatures of the different types of weapons by building a hierarchical classification system",
"title": ""
},
{
"docid": "a799bba2a5d56d45e3b0569119ee8ad2",
"text": "There has been much research investigating team cognition, naturalistic decision making, and collaborative technology as it relates to real world, complex domains of practice. However, there has been limited work in incorporating naturalistic decision making models for supporting distributed team decision making. The aim of this research is to support human decision making teams using cognitive agents empowered by a collaborative Recognition-Primed Decision model. In this paper, we first describe an RPD-enabled agent architecture (R-CAST), in which we have implemented an internal mechanism of decision-making adaptation based on collaborative expectancy monitoring, and an information exchange mechanism driven by relevant cue analysis. We have evaluated R-CAST agents in a real-time simulation environment, feeding teams with frequent decision-making tasks under different tempo situations. While the result conforms to psychological findings that human team members are extremely sensitive to their workload in high-tempo situations, it clearly indicates that human teams, when supported by R-CAST agents, can perform better in the sense that they can maintain team performance at acceptable levels in high time pressure situations.",
"title": ""
},
{
"docid": "b04f42415573e0ada85afcf7f419a3ae",
"text": "Numerous embedding models have been recently explored to incorporate semantic knowledge into visual recognition. Existing methods typically focus on minimizing the distance between the corresponding images and texts in the embedding space but do not explicitly optimize the underlying structure. Our key observation is that modeling the pairwise image-image relationship improves the discrimination ability of the embedding model. In this paper, we propose the structured discriminative and difference constraints to learn visual-semantic embeddings. First, we exploit the discriminative constraints to capture the intraand inter-class relationships of image embeddings. The discriminative constraints encourage separability for image instances of different classes. Second, we align the difference vector between a pair of image embeddings with that of the corresponding word embeddings. The difference constraints help regularize image embeddings to preserve the semantic relationships among word embeddings. Extensive evaluations demonstrate the effectiveness of the proposed structured embeddings for single-label classification, multilabel classification, and zero-shot recognition.",
"title": ""
},
{
"docid": "dc26b875377ae8a5f1d8c85323773fa0",
"text": "In software evolution, developers typically need to identify whether the failure of a test is due to a bug in the source code under test or the obsoleteness of the test code when they execute a test suite. Only after finding the cause of a failure can developers determine whether to fix the bug or repair the obsolete test. Researchers have proposed several techniques to automate test repair. However, test-repair techniques typically assume that test failures are always due to obsolete tests. Thus, such techniques may not be applicable in real world software evolution when developers do not know whether the failure is due to a bug or an obsolete test. To know whether the cause of a test failure lies in the source code under test or in the test code, we view this problem as a classification problem and propose an automatic approach based on machine learning. Specifically, we target Java software using the JUnit testing framework and collect a set of features that may be related to failures of tests. Using this set of features, we adopt the Best-first Decision Tree Learning algorithm to train a classifier with some existing regression test failures as training instances. Then, we use the classifier to classify future failed tests. Furthermore, we evaluated our approach using two Java programs in three scenarios (within the same version, within different versions of a program, and between different programs), and found that our approach can effectively classify the causes of failed tests.",
"title": ""
},
{
"docid": "49663600aeff26af65fbfe39f2ed0161",
"text": "Misuse cases and attack trees have been suggested for security requirements elicitation and threat modeling in software projects. Their use is believed to increase security awareness throughout the software development life cycle. Experiments have identified strengths and weaknesses of both model types. In this paper we present how misuse cases and attack trees can be linked to get a high-level view of the threats towards a system through misuse case diagrams and a more detailed view on each threat through attack trees. Further, we introduce links to security activity descriptions in the form of UML activity graphs. These can be used to describe mitigating security activities for each identified threat. The linking of different models makes most sense when security modeling is supported by tools, and we present the concept of a security repository that is being built to store models and relations such as those presented in this paper.",
"title": ""
},
{
"docid": "25975b36f0276e5c3a257d27fe6d6907",
"text": "The computer processing of forward-look sonar video imagery enables significant capabilities in a wide variety of underwater operations within turbid environments. Accurate automated registration of sonar video images to complement measurements from traditional positioning devices can be instrumental in the detection, localization, and tracking of distinct scene targets, building feature maps, change detection, as well as improving precision in the positioning of unmanned submarines. This work offers a novel solution for the registration of two-dimensional (2-D) forward-look sonar images recorded from a mobile platform, by optimization over the sonar 3-D motion parameters. It incorporates the detection of key features and landmarks, and effectively represents them with Gaussian maps. Improved performance is demonstrated with respect to the state-of-the-art approach utilizing 2-D similarity transformation, based on experiments with real data. C © 2013 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "69058572e8baaef255a3be6ac9eef878",
"text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.",
"title": ""
},
{
"docid": "0080aa23209d70192bb13b9451082803",
"text": "This paper studies the problem of secret-message transmission over a wiretap channel with correlated sources in the presence of an eavesdropper who has no source observation. A coding scheme is proposed based on a careful combination of 1) Wyner-Ziv's source coding to generate secret key from correlated sources based on a certain cost on the channel, 2) one-time pad to secure messages without additional cost, and 3) Wyner's secrecy coding to achieve secrecy based on the advantage of legitimate receiver's channel over the eavesdropper's. The work sheds light on optimal strategies for practical code design for secure communication/storage systems.",
"title": ""
},
{
"docid": "016eca10ff7616958ab8f55af71cf5d7",
"text": "This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.",
"title": ""
},
{
"docid": "01f92f1028201ff5790b4f20ef84618c",
"text": "The need for high-frequency, low-power, wide temperature range, precision on-chip reference clock generation makes relaxation oscillator topology an attractive solution for various automotive applications. This paper presents for the first time a 140MHz relaxation oscillator with robust-against-process-variation temperature compensation scheme. The high-frequency relaxation oscillator achieves 28 ppm/°C frequency stability over the automotive temperature range from −40 to 175°C. The circuit is fabricated in 40nm CMOS technology, occupies 0.009 mm2 and consumes 294µW from 1.2V supply.",
"title": ""
},
{
"docid": "299e83e39c7bb567bd52a3550385eb69",
"text": "Individuals who suffer from schizophrenia comprise I percent of the United States population and are four times more likely to die of suicide than the general US population. Identification of at-risk individuals with schizophrenia is challenging when they do not seek treatment. Microblogging platforms allow users to share their thoughts and emotions with the world in short snippets of text. In this work, we leveraged the large corpus of Twitter posts and machine-learning methodologies to detect individuals with schizophrenia. Using features from tweets such as emoticon use, posting time of day, and dictionary terms, we trained, built, and validated several machine learning models. Our support vector machine model achieved the best performance with 92% precision and 71% recall on the held-out test set. Additionally, we built a web application that dynamically displays summary statistics between cohorts. This enables outreach to undiagnosed individuals, improved physician diagnoses, and destigmatization of schizophrenia.",
"title": ""
},
{
"docid": "e8167685fcbcea1a4c6a825e50eb45d2",
"text": "Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems proved useful to create several language models. Despite the large amount of studies devoted to represent texts with physical models, only a limited number of studies have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex networks methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, the performance obtained with hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents.",
"title": ""
},
{
"docid": "ea75bf062f21a12aacd88ccb61ba47a0",
"text": "This paper describes a Twitter sentiment analysis system that classifies a tweet as positive or negative based on its overall tweet-level polarity. Supervised learning classifiers often misclassify tweets containing conjunctions such as “but” and conditionals such as “if”, due to their special linguistic characteristics. These classifiers also assign a decision score very close to the decision boundary for a large number tweets, which suggests that they are simply unsure instead of being completely wrong about these tweets. To counter these two challenges, this paper proposes a system that enhances supervised learning for polarity classification by leveraging on linguistic rules and sentic computing resources. The proposed method is evaluated on two publicly available Twitter corpora to illustrate its effectiveness.",
"title": ""
},
{
"docid": "4f2cde3d333ae835b4a91ffdf50cfbe7",
"text": "Article history: Received 1 July 2007 Received in revised form 15 March 2008 Accepted 20 March 2008 Business intelligence (BI) systems provide the ability to analyse business information in order to support and improve management decision making across a broad range of business activities. They leverage the large data infrastructure investments (e.g. ERP systems) made by firms, and have the potential to realise the substantial value locked up in a firm's data resources. While substantial business investment in BI systems is continuing to accelerate, there is a complete absence of a specific and rigorous method to measure the realised business value, if any. By exploiting the lessons learned from prior attempts to measure business valueof IT-intensive systems,wedevelopa newmeasure that is based on an understanding of the characteristics of BI systems in a process-oriented framework. We then employ the measure in an examination of the relationship between the business process performance and organizational performance, finding significant differences in the strength of the relationship between industry sectors. This study reinforces the need to consider the specific context of use when designing performance measurement for IT-intensive systems, and highlights the need for further research examining contextual moderators to the realisation of such performance benefits. Crown Copyright © 2008 Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "037ff53b19c51dca7ce6418e8dbbc4f8",
"text": "Critical driver genomic events in colorectal cancer have been shown to affect the response to targeted agents that were initially developed under the 'one gene, one drug' paradigm of precision medicine. Our current knowledge of the complexity of the cancer genome, clonal evolution patterns under treatment pressure and pharmacodynamic effects of target inhibition support the transition from a one gene, one drug approach to a 'multi-gene, multi-drug' model when making therapeutic decisions. Better characterization of the transcriptomic subtypes of colorectal cancer, encompassing tumour, stromal and immune components, has revealed convergent pathway dependencies that mandate a 'multi-molecular' perspective for the development of therapies to treat this disease.",
"title": ""
},
{
"docid": "487256aef0ec451e90eb836aec3ec278",
"text": "In this paper, we introduce profile view (PV) lip reading, a scheme for speaker-dependent isolated word speech recognition. We provide historic motivation for PV from the importance of profile images in facial animation for lip reading, and we present feature extraction schemes for PV as well as for the traditional frontal view (FV) approach. We compare lip reading results for PV and FV, which demonstrate a significant improvement for PV over FV. We show improvement in speech recognition with the integration of audio and visual features. We also found it advantageous to process the visual features over a longer duration than the duration marked by the endpoints of the speech utterance.",
"title": ""
},
{
"docid": "788ea4ece8631c81366e571eb205739f",
"text": "ABSTgACT. Tree pattern matching is an interesting special problem which occurs as a crucial step m a number of programmmg tasks, for instance, design of interpreters for nonprocedural programming languages, automatic implementations of abstract data types, code optimization m compilers, symbohc computation, context searching in structure editors, and automatic theorem provmg. As with the sorting problem, the variations in requirements and resources for each application seem to preclude a uniform, umversal solution to the tree-pattern-matching problem. Instead, a collection of well-analyzed techmques, from which specific applications may be selected and adapted, should be sought. Five new techniques for tree pattern matching are presented, analyzed for time and space complexity, and compared with previously known methods. Particularly important are applications where the same patterns are matched against many subjects and where a subject may be modified incrementally Therefore, methods which spend some tune preprocessmg patterns in order to improve the actual matching time are included",
"title": ""
},
{
"docid": "7c2cb105e5fad90c90aea0e59aae5082",
"text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasion truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when",
"title": ""
}
] |
scidocsrr
|
32f11f2a5a9b15d25f4e7fda4c004cef
|
OctoMap: an efficient probabilistic 3D mapping framework based on octrees
|
[
{
"docid": "3571e2646d76d5f550075952cb75ba30",
"text": "Traditional simultaneous localization and mapping (SLAM) algorithms have been used to great effect in flat, indoor environments such as corridors and offices. We demonstrate that with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments. In particular, we use data acquired with a 3D laser range finder. We use a simple segmentation algorithm to separate the data stream into distinct point clouds, each referenced to a vehicle position. The SLAM technique we then adopt inherits much from 2D delayed state (or scan-matching) SLAM in that the state vector is an ever growing stack of past vehicle positions and inter-scan registrations are used to form measurements between them. The registration algorithm used is a novel combination of previous techniques carefully balancing the need for maximally wide convergence basins, robustness and speed. In addition, we introduce a novel post-registration classification technique to detect matches which have converged to incorrect local minima",
"title": ""
},
{
"docid": "781f82087acbc42e47e52751b1e2a88b",
"text": "This paper presents an algorithm for segmenting 3D point clouds. It extends terrain elevation models by incorporating two types of representations: (1) ground representations based on averaging the height in the point cloud, (2) object models based on a voxelisation of the point cloud. The approach is deployed on Riegl data (dense 3D laser data) acquired in a campus type of environment and compared against six other terrain models. Amongst elevation models, it is shown to provide the best fit to the data as well as being unique in the sense that it jointly performs ground extraction, overhang representation and 3D segmentation. We experimentally demonstrate that the resulting model is also applicable to path planning.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
},
{
"docid": "f35e9fb9fbcc0e610d681ac326899a90",
"text": "In this paper we present a progressive compression method for point sampled models that is specifically apt at dealing with densely sampled surface geometry. The compression is lossless and therefore is also suitable for storing the unfiltered, raw scan data. Our method is based on an octree decomposition of space. The point-cloud is encoded in terms of occupied octree-cells. To compress the octree we employ novel prediction techniques that were specifically designed for point sampled geometry and are based on local surface approximations to achieve high compression rates that outperform previous progressive coders for point-sampled geometry. Moreover we demonstrate that additional point attributes, such as color, which are of great importance for point-sampled geometry, can be well integrated and efficiently encoded in this framework.",
"title": ""
}
] |
[
{
"docid": "e7e07b3f603b72b6f2562857762a7af8",
"text": "Coastal visits not only provide psychological benefits but can also contribute to the accumulation of rubbish. Volunteer beach cleans help address this issue, but may only have limited, local impact. Consequently, it is important to study any broader benefits associated with beach cleans. This article examines the well-being and educational value of beach cleans, as well as their impacts on individuals' behavioral intentions. We conducted an experimental study that allocated students (n = 90) to a beach cleaning, rock pooling, or walking activity. All three coastal activities were associated with positive mood and pro-environmental intentions. Beach cleaning and rock pooling were associated with higher marine awareness. The unique impacts of beach cleaning were that they were rated as most meaningful but linked to lower restorativeness ratings of the environment compared with the other activities. This research highlights the interplay between environment and activities, raising questions for future research on the complexities of person-environment interactions.",
"title": ""
},
{
"docid": "f4cc2848713439b162dc5fc255c336d2",
"text": "We consider the problem of waveform design for multiple input/multiple output (MIMO) radars, where the transmit waveforms are adjusted based on target and clutter statistics. A model for the radar returns which incorporates the transmit waveforms is developed. The target detection problem is formulated for that model. Optimal and suboptimal algorithms are derived for designing the transmit waveforms under different assumptions regarding the statistical information available to the detector. The performance of these algorithms is illustrated by computer simulation.",
"title": ""
},
{
"docid": "1d906388d54a2a7b9db3939f0c6039b5",
"text": "This paper suggests refinements and extensions of the JDL Data Fusion Model, the standard process model used for a multiplicity of community purposes. However, this Model has not been reviewed in accordance with (a) the dynamics of world events and (b) the changes, discoveries, and new methods in both the data fusion research and development community and related IT technologies. This paper suggests ways to revise and extend this important model. Proposals are made regarding (a) improvements in the understanding of internal processing within a fusion node and (b) extending the model to include (1) remarks on issues related to quality control, reliability, and consistency in DF processing, (2) assertions about the need for co-processing of abductive/ inductive and deductive inferencing processes, (3) remarks about the need for and exploitation of an onto logicallybased approach to DF process design, and ( 4) extensions to account for the case of Distributed Data Fusion (DDF).",
"title": ""
},
{
"docid": "9da1449675af42a2fc75ba8259d22525",
"text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. 
Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; Roedder John et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: test a protocol for developing category-specific measures of brand image; examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background: brand associations. According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything “linked” in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Scales to measure partially brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length.
Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, “using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value”. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality. Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan, 1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand, whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitud",
"title": ""
},
{
"docid": "ab132902ce21c35d4b5befb8ff2898b5",
"text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.",
"title": ""
},
{
"docid": "4c46d2cbc52dbc2780e002651d88f3a7",
"text": "Processing in memory (PIM) implemented via 3D die stacking has been recently proposed to reduce the widening gap between processor and memory performance. By moving computation that demands high memory bandwidth to the base logic die of a 3D memory stack, PIM promises significant improvements in energy efficiency. However, the vision of PIM implemented via 3D die stacking could potentially be derailed if the processor(s) raise the stack’s temperature to unacceptable levels. In this paper, we study the thermal constraints for PIM across different processor organizations and cooling solutions and show the range of designs that are viable under different conditions. We also demonstrate that PIM is feasible even with low-end, fanless cooling solutions. We believe these results help alleviate PIM thermal feasibility concerns and identify viable design points, thereby encouraging further exploration and research in novel PIM architectures, technologies, and use cases.",
"title": ""
},
{
"docid": "9ade6407ce2603e27744df1b03728bfc",
"text": "We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.",
"title": ""
},
{
"docid": "48f06ed96714c2970550fef88d21d517",
"text": "Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?",
"title": ""
},
{
"docid": "d747857cda669738cc8c27cc0a92a95d",
"text": "Angle of Arrival (AoA) estimation that applies wideband channel estimation is superior to the narrowband MUSIC (multiple signal classification) approach, even when averaging its results over the entire relevant band. This work reports the results of indoor AoA estimation based on wideband propagation channel measurements taken over a uniform linear antenna array. The measurements were carried out around 2.4 GHz, with 50 to 800 MHz bandwidths.",
"title": ""
},
{
"docid": "50df49f3c9de66798f89fdeab9d2ae85",
"text": "Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high stakes settings– such as sentencing, hiring, policing, college admissions, and parole decisions– is the perceived “neutrality” of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data to which the models will ultimately be trained. Unlike previous work in this area, our procedure accommodates data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. In the process, we demonstrate that a common approach to creating “race-neutral” models– omitting race as a covariate– still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.",
"title": ""
},
{
"docid": "ceb9e37cee390fac163154b70808f89d",
"text": "This study extends the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model to investigate factors affecting the acceptance and use of a social networking service (SNS) called Instagram. The UTAUT2 model is modified to better suit the context of SNSs by replacing the price value construct with self-congruence. Furthermore, we explore the effects of behavioral intention and use behavior on \"user indegree\" defined as the number of people who follow an SNS user. The results of the survey study largely support the hypothesized model in the context of Instagram. The findings contribute to previous knowledge by demonstrating the important roles of hedonic motivation and habit in consumer acceptance and use of SNSs, and by providing novel insights into how users can attract followers within the social networks.",
"title": ""
},
{
"docid": "2a13609a94050c4477d94cf0d89cbdd3",
"text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.",
"title": ""
},
{
"docid": "6fa2d4ed8d92c158fc220265b550552e",
"text": "A formal framework for software development and analysis is presented, which aims at reducing the gap between formal specification and implementation by integrating the two and allowing them together to form a system. It is called monitoring-oriented programming (MOP), since runtime monitoring is supported and encouraged as a fundamental principle. Monitors are automatically synthesized from formal specifications and integrated at appropriate places in the program, according to user-configurable attributes. Violations and/or validations of specifications can trigger user-defined code at any points in the program, in particular recovery code, outputting/sending messages, or raising exceptions. The major novelty of MOP is its generality w.r.t. logical formalisms: it allows users to insert their favorite or domain-specific specification formalisms via logic plug-in modules. A WWW repository has been created, allowing MOP users to download and upload logic plugins. An experimental prototype tool, called Java-MOP, is also discussed, which currently supports most but not all of the desired MOP features.",
"title": ""
},
{
"docid": "e8e8869d74dd4667ceff63c8a24caa27",
"text": "We address the problem of recommending suitable jobs to people who are seeking a new job. We formulate this recommendation problem as a supervised machine learning problem. Our technique exploits all past job transitions as well as the data associated with employees and institutions to predict an employee's next job transition. We train a machine learning model using a large number of job transitions extracted from the publicly available employee profiles in the Web. Experiments show that job transitions can be accurately predicted, significantly improving over a baseline that always predicts the most frequent institution in the data.",
"title": ""
},
{
"docid": "1bcf81291a4e973857eb9834c6ac9999",
"text": "Timed automata are nite-state machines constrained by timing requirements so that they accept timed words | words in which every symbol is labeled with a real-valued time. These automata were designed to lead to a theory of nite-state real-time properties with applications to the automatic veri cation of real-time systems. However, both deterministic and nondeterministic versions su er from drawbacks: several key problems, such as language inclusion, are undecidable for nondeterministic timed automata, whereas deterministic timed automata lack considerable expressive power when compared to decidable real-time logics. This is why we introduce two-way timed automata | timed automata that can move back and forth while reading a timed word. Two-wayness in its unrestricted form leads, like nondeterminism, to the undecidability of language inclusion. However, if we restrict the number of times an input symbol may be revisited, then two-wayness is both harmless and desirable. We show that the resulting class of bounded two-way deterministic timed automata is closed under all boolean operations, has decidable (PSPACE-complete) emptiness and inclusion problems, and subsumes all decidable real-time logics we know. We obtain a strict hierarchy of real-time properties: deterministic timed automata can accept more languages as the bound on the number of times an input symbol may be revisited is increased. This hierarchy is also enforced by the number of alternations between past and future operators in temporal logic. The combination of our results leads to a decision procedure for a real-time logic with past operators.",
"title": ""
},
{
"docid": "34a46b80f025cd8cd25243a777b4ff6a",
"text": "This research attempts to investigate the effects of blog marketing on brand attitude and purchase intention. The elements of blog marketing are identified as community identification, interpersonal trust, message exchange, and two-way communication. The relationships among variables are pictured on the fundamental research framework provided by this study. Data were collected via an online questionnaire and 727 useable samples were collected and analyzed utilizing AMOS 5.0. The empirical findings show that the blog marketing elements can impact on brand attitude positively except for the element of community identification. Further, the analysis result also verifies the moderating effects on the relationship between blog marketing elements and brand attitude.",
"title": ""
},
{
"docid": "93fff17c1e704d496a39925e2a3f3e7f",
"text": "The Hausdorff fractal dimension has been a fast-to-calculate method to estimate complexity of fractal shapes. In this work, a modified version of this fractal dimension is presented in order to make it more robust when applied in estimating complexity of non-fractal images. The modified Hausdorff fractal dimension stands on two features that weaken the requirement of presence of a shape and also reduce the impact of the noise possibly presented in the input image. The new algorithm has been evaluated on a set of images of different character with promising performance.",
"title": ""
},
{
"docid": "6be148b33b338193ffbde2683ddc8991",
"text": "Predicting stock exchange rates is receiving increasing attention and is a vital financial problem as it contributes to the development of effective strategies for stock exchange transactions. The forecasting of stock price movement in general is considered to be a thought-provoking and essential task for financial time series' exploration. In this paper, a Least Absolute Shrinkage and Selection Operator (LASSO) method based on a linear regression model is proposed as a novel method to predict financial market behavior. LASSO method is able to produce sparse solutions and performs very well when the numbers of features are less as compared to the number of observations. Experiments were performed with Goldman Sachs Group Inc. stock to determine the efficiency of the model. The results indicate that the proposed model outperforms the ridge linear regression model.",
"title": ""
},
{
"docid": "6fc870c703611e07519ce5fe956c15d1",
"text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"title": ""
},
{
"docid": "fa012857ec951bf6365559ab734e9367",
"text": "The aim of this study is to examine the teachers’ attitudes toward the inclusion of students with special educational needs, in public schools and how these attitudes are influenced by their self-efficacy perceptions. The sample is comprised of 416 preschool, primary and secondary education teachers. The results show that, in general, teachers develop positive attitude toward the inclusive education. Higher self-efficacy was associated rather with their capacity to come up against negative experiences at school, than with their attitude toward disabled learners in the classroom and their ability to meet successfully the special educational needs students. The results are consistent with similar studies and reveal the need of establishing collaborative support networks in school districts and the development of teacher education programs, in order to achieve the enrichment of their knowledge and skills to address diverse needs appropriately.",
"title": ""
}
] |
scidocsrr
|
5bf884c4a8bf5ebdbcac8cac94b0a2f5
|
Joint Subcarrier and CPU Time Allocation for Mobile Edge Computing
|
[
{
"docid": "f58a1b5f4c914a0ab3fcf3e2a8820e45",
"text": "This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed lights on system implementation of mobile cloud computing under stochastic wireless channel.",
"title": ""
},
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
},
{
"docid": "335a330d7c02f13c0f50823461f4e86f",
"text": "Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources-the transmit precoding matrices of the MUs-and the computational resources-the CPU cycles/second assigned by the cloud to each MU-in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.",
"title": ""
}
] |
[
{
"docid": "0d6e5e20d6a909a6450671feeb4ac261",
"text": "Rita bakalu, a new species, is described from the Godavari river system in peninsular India. With this finding, the genus Rita is enlarged to include seven species, comprising six species found in South Asia, R. rita, R. macracanthus, R. gogra, R. chrysea, R. kuturnee, R. bakalu, and one species R. sacerdotum from Southeast Asia. R. bakalu is distinguished from its congeners by a combination of the following characters: eye diameter 28–39% HL and 20–22 caudal fin rays; teeth in upper jaw uniformly villiform in two patches, interrupted at the midline; palatal teeth well-developed villiform, in two distinct patches located at the edge of the palate. The mtDNA cytochrome C oxidase I sequence analysis confirmed that the R. bakalu is distinct from the other congeners of Rita. Superficially, R. bakalu resembles R. kuturnee, reported from the Godavari and Krishna river systems; however, the two species are discriminated due to differences in the structure of their teeth patches on upper jaw and palate, anal fin originating before the origin of adipose fin, comparatively larger eye diameter, longer mandibular barbels, and vertebral count. The results conclude that the river Godavari harbors a different species of Rita, R. bakalu which is new to science.",
"title": ""
},
{
"docid": "5c90cd6c4322c30efb90589b1a65192e",
"text": "The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model can not produce the disjunction effect, which assumes that a decision has to be certain at one time. However, the state is allowed to be uncertain in the EM model before the final decision is made. An extra uncertainty degree parameter is defined by a belief entropy, named Deng entropy, to assignment the basic probability assignment of the uncertain state, which is the key to predict the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of EM model. The disjunction effect can be well predicted ∗Corresponding author at Wen Jiang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail address: jiangwen@nwpu.edu.cn, jiangwenpaper@hotmail.com Preprint submitted to Elsevier May 19, 2017 and the free parameters are less compared with the existing models.",
"title": ""
},
{
"docid": "b09d23c24625dc17e351d79ce88405b8",
"text": "-This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) characters. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representations 6f the characters, such as solid binary characters, character contours, skeletons (thinned characters) or gray-level subimages of each individual character. The feature extraction methods are discussed in terms of invariance properties, reconstructability and expected distortions and variability of the characters. The problem of choosing the appropriate feature extraction method for a given application is also discussed. When a few promising feature extraction methods have been identified, they need to be evaluated experimentally to find the best method for the given application. Feature extraction Optical character recognition Character representation Invariance Reconstructability I. I N T R O D U C T I O N Optical character recognition (OCR) is one of the most successful applications of automatic pattern recognition. Since the mid 1950s, OCR has been a very active field for research and development, ca) Today, reasonably good OCR packages can be bought for as little as $100. However, these are only able to recognize high quality printed text documents or neatly written handprinted text. The current research in OCR is now addressing documents that are not well handled by the available systems, including severely degraded, omnifont machine-printed text and (unconstrained) handwritten text. Also, efforts are being made to achieve lower substitution error rates and reject rates even on good quality machine-printed text, since an experienced human typist still has a much lower error rate, albeit at a slower speed. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance. Our own interest in character recognition is to recognize hand-printed digits in hydrographic maps (Fig. 1), but we have tried not to emphasize this particular application in the paper. Given the large number of feature extraction methods reported in the literature, a newcomer to the field is faced with the following question: which feature ext Author to whom correspondence should be addressed. This work was done while OD. Trier was visiting Michigan State University. traction method is the best for a given application? This question led us to characterize the available feature extraction methods, so that the most promising methods could be sorted out. An experimental evaluation of these few promising methods must still be performed to select the best method for a specific application. In this process, one might find that a specific feature extraction method needs to be further developed. A full performance evaluation of each method in terms of classification accuracy and speed is not within the scope of this review paper. In order to study performance issues, we will have to implement all the feature extraction methods, which is an enormous task. In addition, the performance also depends on the type of classifier used. Different feature types may need different types of classifiers. Also, the classification results reported in the literature are not comparable because they are based on different data sets. 
Given the vast number of papers published on OCR every year, it is impossible to include all the available feature extraction methods in this survey. Instead, we have tried to make a representative selection to illustrate the different principles that can be used. Two-dimensional (2-D) object classification has several applications in addition to character recognition. These include airplane recognition, recognition of mechanical parts and tools, and tissue classification in medical imaging. Several of the feature extraction techniques described in this paper for OCR have also been found to be useful in such applications.",
"title": ""
},
{
"docid": "aace50c8446403a9f72b24bce1e88c30",
"text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of ModelDriven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.",
"title": ""
},
{
"docid": "65914e9526e1e765d11a9faf8f530f23",
"text": "Named Entity Recognition (NER) is a tough task in Chinese social media due to a large portion of informal writings. Existing research uses only limited in-domain annotated data and achieves low performance. In this paper, we utilize both limited in-domain data and enough out-of-domain data using a domain adaptation method. We propose a multichannel LSTM-CRF model that employs different channels to capture general patterns, in-domain patterns and out-of-domain patterns in Chinese social media. The extensive experiments show that our model yields 9.8% improvement over previous state-of-the-art methods. We further find that a shared embedding layer is important and randomly initialized embeddings are better than the pretrained ones.",
"title": ""
},
{
"docid": "7574373f4082ed5245cb1107d1917192",
"text": "Heat exchanger system is widely used in chemical plants because it can sustain wide range of temperature and pressure. The main purpose of a heat exchanger system is to transfer heat from a hot fluid to a cooler fluid, so temperature control of outlet fluid is of prime importance. To control the temperature of outlet fluid of the heat exchanger system a conventional PID controller can be used. Due to inherent disadvantages of conventional control techniques, model based control technique is employed and an internal model based PID controller is developed to control the temperature of outlet fluid of the heat exchanger system. The designed controller regulates the temperature of the outgoing fluid to a desired set point in the shortest possible time irrespective of load and process disturbances, equipment saturation and nonlinearity. The developed internal model based PID controller has demonstrated 84% improvement in the overshoot and 44.6% improvement in settling time as compared to the",
"title": ""
},
{
"docid": "505a9b6139e8cbf759652dc81f989de9",
"text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern",
"title": ""
},
{
"docid": "9697137a72f41fb4fb841e4e1b41be62",
"text": "Cast shadows are an informative cue to the shape of objects. They are particularly valuable for discovering object’s concavities which are not available from other cues such as occluding boundaries. We propose a new method for recovering shape from shadows which we call shadow carving. Given a conservative estimate of the volume occupied by an object, it is possible to identify and carve away regions of this volume that are inconsistent with the observed pattern of shadows. We prove a theorem that guarantees that when these regions are carved away from the shape, the shape still remains conservative. Shadow carving overcomes limitations of previous studies on shape from shadows because it is robust with respect to errors in shadows detection and it allows the reconstruction of objects in the round, rather than just bas-reliefs. We propose a reconstruction system to recover shape from silhouettes and shadow carving. The silhouettes are used to reconstruct the initial conservative estimate of the object’s shape and shadow carving is used to carve out the concavities. We have simulated our reconstruction system with a commercial rendering package to explore the design parameters and assess the accuracy of the reconstruction. We have also implemented our reconstruction scheme in a table-top system and present the results of scanning of several objects.",
"title": ""
},
{
"docid": "18e019622188ab6ddb2beca69d51e1c9",
"text": "The rhesus macaque (Macaca mulatta) is the most utilized primate model in the biomedical and psychological sciences. Expressive behavior is of interest to scientists studying these animals, both as a direct variable (modeling neuropsychiatric disease, where expressivity is a primary deficit), as an indirect measure of health and welfare, and also in order to understand the evolution of communication. Here, intramuscular electrical stimulation of facial muscles was conducted in the rhesus macaque in order to document the relative contribution of each muscle to the range of facial movements and to compare the expressive function of homologous muscles in humans, chimpanzees and macaques. Despite published accounts that monkeys possess less differentiated and less complex facial musculature, the majority of muscles previously identified in humans and chimpanzees were stimulated successfully in the rhesus macaque and caused similar appearance changes. These observations suggest that the facial muscular apparatus of the monkey has extensive homology to the human face. The muscles of the human face, therefore, do not represent a significant evolutionary departure from those of a monkey species. Thus, facial expressions can be compared between humans and rhesus macaques at the level of the facial musculature, facilitating the systematic investigation of comparative facial communication.",
"title": ""
},
{
"docid": "87f0a390580c452d77fcfc7040352832",
"text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:",
"title": ""
},
{
"docid": "a1c534ca8925ccfed04b21a92263b9d7",
"text": "In the last few decades, Structure from Motion (SfM) and visual Simultaneous Localization and Mapping (visual SLAM) techniques have gained significant interest from both the computer vision and robotic communities. Many variants of these techniques have started to make an impact in a wide range of applications, including robot navigation and augmented reality. However, despite some remarkable results in these areas, most SfM and visual SLAM techniques operate based on the assumption that the observed environment is static. However, when faced with moving objects, overall system accuracy can be jeopardized. In this article, we present for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments. We identify three main problems: how to perform reconstruction (robust visual SLAM), how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction. Based on this categorization, we provide a comprehensive taxonomy of existing approaches. Finally, the advantages and disadvantages of each solution class are critically discussed from the perspective of practicality and robustness.",
"title": ""
},
{
"docid": "e60622f175cb091537f3a1a2cb2550ae",
"text": "Non-differentiable and constrained optimization play a key role in machine learning, signal and image processing, communications, and beyond. For high-dimensional minimization problems involving large datasets or many unknowns, the forward-backward splitting method (also known as the proximal gradient method) provides a simple, yet practical solver. Despite its apparent simplicity, the performance of the forward-backward splitting method is highly sensitive to implementation details. This article provides an introductory review of forward-backward splitting with a special emphasis on practical implementation aspects. In particular, issues like stepsize selection, acceleration, stopping conditions, and initialization are considered. Numerical experiments are used to compare the effectiveness of different approaches. Many variations of forward-backward splitting are implemented in a new solver called FASTA (short for Fast Adaptive Shrinkage/Thresholding Algorithm). FASTA provides a simple interface for applying forward-backward splitting to a broad range of problems appearing in sparse recovery, logistic regression, multiple measurement vector (MMV) problems, democratic representations, 1-bit matrix completion, total-variation (TV) denoising, phase retrieval, as well as non-negative matrix factorization.",
"title": ""
},
{
"docid": "eb86266b6f2a6c5bddece58d2ea6121a",
"text": "Adoptive immunotherapy, or the infusion of lymphocytes, is a promising approach for the treatment of cancer and certain chronic viral infections. The application of the principles of synthetic biology to enhance T cell function has resulted in substantial increases in clinical efficacy. The primary challenge to the field is to identify tumor-specific targets to avoid off-tumor, on-target toxicity. Given recent advances in efficacy in numerous pilot trials, the next steps in clinical development will require multicenter trials to establish adoptive immunotherapy as a mainstream technology.",
"title": ""
},
{
"docid": "067ec456d76cce7978b3d2f0c67269ed",
"text": "With the development of deep learning, the performance of hyperspectral image (HSI) classification has been greatly improved in recent years. The shortage of training samples has become a bottleneck for further improvement of performance. In this paper, we propose a novel convolutional neural network framework for the characteristics of hyperspectral image data called HSI-CNN, which can also provides ideas for the processing of one-dimensional data. Firstly, the spectral-spatial feature is extracted from a target pixel and its neighbors. Then, a number of one-dimensional feature maps, obtained by convolution operation on spectral-spatial features, are stacked into a two-dimensional matrix. Finally, the two-dimensional matrix considered as an image is fed into standard CNN. This is why we call it HSI-CNN. In addition, we also implements two depth network classification models, called HSI-CNN+XGBoost and HSI-CapsNet, in order to compare the performance of our framework. Experiments show that the performance of hyperspectral image classification is improved efficiently with HSI-CNN framework. We evaluate the model's performance using four popular HSI datasets, which are the Kennedy Space Center (KSC), Indian Pines (IP), Pavia University scene (PU) and Salinas scene (SA). As far as we concerned, the accuracy of HSI-CNN has kept pace with the state-of-art methods, which is 99.28%, 99.09%, 99.57%, 98.97% separately.",
"title": ""
},
{
"docid": "2f9d5235bac1d8b3a9c26cd00e843fb9",
"text": "K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.",
"title": ""
},
{
"docid": "1c17535a4f1edc36b698295136e9711a",
"text": "Massive digital acquisition and preservation of deteriorating historical and artistic documents is of particular importance due to their value and fragile condition. The study and browsing of such digital libraries is invaluable for scholars in the Cultural Heritage field but requires automatic tools for analyzing and indexing these datasets. We present two completely automatic methods requiring no human intervention: text height estimation and text line extraction. Our proposed methods have been evaluated on a huge heterogeneous corpus of illuminated medieval manuscripts of different writing styles and with various problematic attributes, such as holes, spots, ink bleed-through, ornamentation, background noise, and overlapping text lines. Our experimental results demonstrate that these two new methods are efficient and reliable, even when applied to very noisy and damaged old handwritten manuscripts.",
"title": ""
},
{
"docid": "3dfd3093b6abb798474dec6fb9cfca36",
"text": "This paper proposes a new image representation for texture categorization, which is based on extension of local binary patterns (LBP). As we know LBP can achieve effective description ability with appearance invariance and adaptability of patch matching based methods. However, LBP only thresholds the differential values between neighborhood pixels and the focused one to 0 or 1, which is very sensitive to noise existing in the processed image. This study extends LBP to local ternary patterns (LTP), which considers the differential values between neighborhood pixels and the focused one as negative or positive stimulus if the absolute differential value is large; otherwise no stimulus (set as 0). With the ternary values of all neighbored pixels, we can achieve a pattern index for each local patch, and then extract the pattern histogram for image representation. Experiments on two texture datasets: Brodats32 and KTH TIPS2-a validate that the robust LTP can achieve much better performances than the conventional LBP and the state-of-the-art methods.",
"title": ""
},
{
"docid": "824bcc0f9f4e71eb749a04f441891200",
"text": "We characterize the singular values of the linear transformation associated with a convolution applied to a two-dimensional feature map with multiple channels. Our characterization enables efficient computation of the singular values of convolutional layers used in popular deep neural network architectures. It also leads to an algorithm for projecting a convolutional layer onto the set of layers obeying a bound on the operator norm of the layer. We show that this is an effective regularizer; periodically applying these projections during training improves the test error of a residual network on CIFAR-10 from 6.2% to 5.3%.",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "7677f90e0d949488958b27422bdffeb5",
"text": "This vignette is a slightly modified version of Koenker (2008a). It was written in plain latex not Sweave, but all data and code for the examples described in the text are available from either the JSS website or from my webpages. Quantile regression for censored survival (duration) data offers a more flexible alternative to the Cox proportional hazard model for some applications. We describe three estimation methods for such applications that have been recently incorporated into the R package quantreg: the Powell (1986) estimator for fixed censoring, and two methods for random censoring, one introduced by Portnoy (2003), and the other by Peng and Huang (2008). The Portnoy and Peng-Huang estimators can be viewed, respectively, as generalizations to regression of the Kaplan-Meier and NelsonAalen estimators of univariate quantiles for censored observations. Some asymptotic and simulation comparisons are made to highlight advantages and disadvantages of the three methods.",
"title": ""
}
] |
scidocsrr
|
2e0a098a2a27c006428763e1956f57c6
|
Intelligent defense using pretense against targeted attacks in cloud platforms
|
[
{
"docid": "1cc68f148aef0aa8f6e38b613bb1cc55",
"text": "Conventional network defense tools such as intrusion detection systems and anti-virus focus on the vulnerability component of risk, and traditional incident response methodology presupposes a successful intrusion. An evolution in the goals and sophistication of computer network intrusions has rendered these approaches insufficient for certain actors. A new class of threats, appropriately dubbed the “Advanced Persistent Threat” (APT), represents well-resourced and trained adversaries that conduct multi-year intrusion campaigns targeting highly sensitive economic, proprietary, or national security information. These adversaries accomplish their goals using advanced tools and techniques designed to defeat most conventional computer network defense mechanisms. Network defense techniques which leverage knowledge about these adversaries can create an intelligence feedback loop, enabling defenders to establish a state of information superiority which decreases the adversary’s likelihood of success with each subsequent intrusion attempt. Using a kill chain model to describe phases of intrusions, mapping adversary kill chain indicators to defender courses of action, identifying patterns that link individual intrusions into broader campaigns, and understanding the iterative nature of intelligence gathering form the basis of intelligence-driven computer network defense (CND). Institutionalization of this approach reduces the likelihood of adversary success, informs network defense investment and resource prioritization, and yields relevant metrics of performance and effectiveness. The evolution of advanced persistent threats necessitates an intelligence-based model because in this model the defenders mitigate not just vulnerability, but also the threat component of risk.",
"title": ""
}
] |
[
{
"docid": "6bbcbe9f4f4ede20d2b86f6da9167110",
"text": "Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.",
"title": ""
},
{
"docid": "30fa14e4cfa8e33d863295c4f14ee671",
"text": "Approximate computing can decrease the design complexity with an increase in performance and power efficiency for error resilient applications. This brief deals with a new design approach for approximation of multipliers. The partial products of the multiplier are altered to introduce varying probability terms. Logic complexity of approximation is varied for the accumulation of altered partial products based on their probability. The proposed approximation is utilized in two variants of 16-bit multipliers. Synthesis results reveal that two proposed multipliers achieve power savings of 72% and 38%, respectively, compared to an exact multiplier. They have better precision when compared to existing approximate multipliers. Mean relative error figures are as low as 7.6% and 0.02% for the proposed approximate multipliers, which are better than the previous works. Performance of the proposed multipliers is evaluated with an image processing application, where one of the proposed models achieves the highest peak signal to noise ratio.",
"title": ""
},
{
"docid": "471c52fca57c672267ef69e3e3db9cd9",
"text": "This paper presents the approach of extending cellular networks with millimeter-wave backhaul and access links. Introducing a logical split between control and user plane will permit full coverage while seamlessly achieving very high data rates in the vicinity of mm-wave small cells.",
"title": ""
},
{
"docid": "3a5bfbf84d3a709a4e953da3d6bdc1a0",
"text": "Traditional (univariate) analysis of functional MRI (fMRI) data relies exclusively on the information contained in the time course of individual voxels. Multivariate analyses can take advantage of the information contained in activity patterns across space, from multiple voxels. Such analyses have the potential to greatly expand the amount of information extracted from fMRI data sets. In the present study, multivariate statistical pattern recognition methods, including linear discriminant analysis and support vector machines, were used to classify patterns of fMRI activation evoked by the visual presentation of various categories of objects. Classifiers were trained using data from voxels in predefined regions of interest during a subset of trials for each subject individually. Classification of subsequently collected fMRI data was attempted according to the similarity of activation patterns to prior training examples. Classification was done using only small amounts of data (20 s worth) at a time, so such a technique could, in principle, be used to extract information about a subject's percept on a near real-time basis. Classifiers trained on data acquired during one session were equally accurate in classifying data collected within the same session and across sessions separated by more than a week, in the same subject. Although the highest classification accuracies were obtained using patterns of activity including lower visual areas as input, classification accuracies well above chance were achieved using regions of interest restricted to higher-order object-selective visual areas. In contrast to typical fMRI data analysis, in which hours of data across many subjects are averaged to reveal slight differences in activation, the use of pattern recognition methods allows a subtle 10-way discrimination to be performed on an essentially trial-by-trial basis within individuals, demonstrating that fMRI data contain far more information than is typically appreciated.",
"title": ""
},
{
"docid": "b113d45660629847afbd7faade1f3a71",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.",
"title": ""
},
{
"docid": "af6b0d1f5f3938c0912dccbe43a4a88b",
"text": "The mean body size of limnetic cladocerans decreases from cold temperate to tropical regions, in both the northern and the southern hemisphere. This size shift has been attributed to both direct (e.g. physiological) or indirect (especially increased predation) impacts. To provide further information on the role of predation, we compiled results from several studies of subtropical Uruguayan lakes using three different approaches: (i) field observations from two lakes with contrasting fish abundance, Lakes Rivera and Rodó, (ii) fish exclusion experiments conducted in in-lake mesocosms in three lakes, and (iii) analyses of the Daphnia egg bank in the surface sediment of eighteen lakes. When fish predation pressure was low due to fish kills in Lake Rivera, large-bodied Daphnia appeared. In contrast, small-sized cladocerans were abundant in Lake Rodó, which exhibited a typical high abundance of fish. Likewise, relatively large cladocerans (e.g. Daphnia and Simocephalus) appeared in fishless mesocosms after only 2 weeks, most likely hatched from resting egg banks stored in the surface sediment, but their abundance declined again after fish stocking. Moreover, field studies showed that 9 out of 18 Uruguayan shallow lakes had resting eggs of Daphnia in their surface sediment despite that this genus was only recorded in three of the lakes in summer water samples, indicating that Daphnia might be able to build up populations at low risk of predation. Our results show that medium and large-sized zooplankton can occur in subtropical lakes when fish predation is removed. The evidence provided here collectively confirms the hypothesis that predation, rather than high-temperature induced physiological constraints, is the key factor determining the dominance of small-sized zooplankton in warm lakes.",
"title": ""
},
{
"docid": "6cb49a604153acbf516bc0c7bd93f922",
"text": "Reliable and quick response fault diagnosis is crucial for the wind turbine generator system (WTGS) to avoid unplanned interruption and to reduce the maintenance cost. However, the conditional data generated from WTGS operating in a tough environment is always dynamical and high-dimensional. To address these challenges, we propose a new fault diagnosis scheme which is composed of multiple extreme learning machines (ELM) in a hierarchical structure, where a forwarding list of ELM layers is concatenated and each of them is processed independently for its corresponding role. The framework enables both representational feature learning and fault classification. The multi-layered ELM based representational learning covers functions including data preprocessing, feature extraction and dimension reduction. An ELM based autoencoder is trained to generate a hidden layer output weight matrix, which is then used to transform the input dataset into a new feature representation. Compared with the traditional feature extraction methods which may empirically wipe off some “insignificant’ feature information that in fact conveys certain undiscovered important knowledge, the introduced representational learning method could overcome the loss of information content. The computed output weight matrix projects the high dimensional input vector into a compressed and orthogonally weighted distribution. The last single layer of ELM is applied for fault classification. Unlike the greedy layer wise learning method adopted in back propagation based deep learning (DL), the proposed framework does not need iterative fine-tuning of parameters. To evaluate its experimental performance, comparison tests are carried out on a wind turbine generator simulator. The results show that the proposed diagnostic framework achieves the best performance among the compared approaches in terms of accuracy and efficiency in multiple faults detection of wind turbines.",
"title": ""
},
{
"docid": "5f63aa64d24dcb011db3dc2604af5e73",
"text": "Communication aimed at promoting civic engagement may become problematic when citizen roles undergo historic changes. In the current era, younger generations are embracing more expressive styles of actualizing citizenship defined around peer content sharing and social media, in contrast to earlier models of dutiful citizenship based on one-way communication managed by authorities. An analysis of 90 youth Web sites operated by diverse civic and political organizations in the United States reveals uneven conceptions of citizenship and related civic skills, suggesting that many established organization are out of step with changing civic styles.",
"title": ""
},
{
"docid": "1c05d934574fe3d7f115863067a34b96",
"text": "We present EzPC: a secure two-party computation (2PC) framework that generates efficient 2PC protocols from high-level, easyto-write programs. EzPC provides formal correctness and security guarantees while maintaining performance and scalability. Previous language frameworks, such as CBMC-GC, ObliVM, SMCL, and Wysteria, generate protocols that use either arithmetic or boolean circuits exclusively. Our compiler is the first to generate protocols that combine both arithmetic sharing and garbled circuits for better performance. We empirically demonstrate that the protocols generated by our framework match or outperform (up to 19x) recent works that provide hand-crafted protocols for various functionalities such as secure prediction and matrix factorization.",
"title": ""
},
{
"docid": "421a0d89557ea20216e13dee9db317ca",
"text": "Online advertising is progressively moving towards a programmatic model in which ads are matched to actual interests of individuals collected as they browse the web. Letting the huge debate around privacy aside, a very important question in this area, for which little is known, is: How much do advertisers pay to reach an individual?\n In this study, we develop a first of its kind methodology for computing exactly that - the price paid for a web user by the ad ecosystem - and we do that in real time. Our approach is based on tapping on the Real Time Bidding (RTB) protocol to collect cleartext and encrypted prices for winning bids paid by advertisers in order to place targeted ads. Our main technical contribution is a method for tallying winning bids even when they are encrypted. We achieve this by training a model using as ground truth prices obtained by running our own \"probe\" ad-campaigns. We design our methodology through a browser extension and a back-end server that provides it with fresh models for encrypted bids. We validate our methodology using a one year long trace of 1600 mobile users and demonstrate that it can estimate a user's advertising worth with more than 82% accuracy.",
"title": ""
},
{
"docid": "a9069d75cd3b3c7e09d67b16bb52864d",
"text": "The content of this dissertation lies at the intersection of analysis and applications of PDE to image processing and computer vision applications. In the first part of this thesis, we propose e fficient and accurate algorithms for computing certain area preserving geometric motions of curves in the plane, such as area preserving motion by curvature. These schemes are based on a new class of di ffusion generated motion algorithms using signed distance functions. In particular, they alternate two very simple and fast operations, namely convolution with the Gaussian kernel and construction of the distance function, to generate the desired geometric flow in an unconditionally stable manner. We present applications of these area preserving flows to large scale simulations of coarsening, and inverse problems. In the second part of this dissertation, we study the discrete version of a family of illposed, nonlinear di ffusion equations of order 2 n. The fourth order ( n = 2) version of these equations constitutes our main motivation, as it appears prominently in image processing and computer vision literature. It was proposed by You and Kaveh as a model for denoising images while maintaining sharp object boundaries (edges). The second order equation (n = 1) corresponds to another famous model from image processing, namely Perona and Malik’s anisotropic di ffusion, and was studied in earlier papers. The equations studied in this paper are high order analogues of the Perona-Malik equation, and like the second order model, their continuum versions violate parabolicity and hence lack well-posedness theory. We follow a recent technique from Kohn and Otto, and prove a weak upper bound",
"title": ""
},
{
"docid": "cb3a4bc774e70e016df3f50cc205ca87",
"text": "A wide variety of electronic applications deal with the conditioning of small input signals. These systems require signal paths with very low offset voltage and low offset voltage drift over time and temperature. With standard linear components, the only way to achieve this is to use system-level auto-calibration. However, adding autocalibration requires more complicated hardware and software and can slow down time to market for new products. The alternative is to use components with low offset and low drift. The amplifiers with by far the lowest offset and drift available are the auto-zero amplifiers (AZAs). These amplifiers achieve high dc precision through a continuously running calibration mechanism that is implemented on-chip. With a typical input offset of 1 μV, a temperature-related drift of 20 nV/oC, and a long-term drift of 20 nV/month, these amplifiers satisfy even the highest requirements of dc accuracy. Today’s AZAs differ neither in form nor in the application from standard operational amplifiers. There is, however, some hesitation when it comes to using AZAs, as most engineers associate them with the older chopper amplifiers and chopper-stabilized amplifier designs. This stigma has been perpetuated either by engineers who worked with the older chopper amplifiers and remember the difficulties they had with them, or younger engineers who learned about chopper amplifiers in school but probably did not understand them very well. The original chopper amplifier heralded the beginning of the new era of self-calibrating amplifiers more than 50 years ago. This amplifier provided extreme low values for offset and drift, but its design was complicated and expensive. In addition, ac performance was limited to a few hertz of input bandwidth accompanied by a high level of output noise. Over the years, unfortunately, the term “chopper amplifier” became a synonym for any amplifier with internal calibration capability. Therefore, AZAs, often wrongly designated as chopper or chopper-stabilized amplifiers, are associated with the stigma of the older chopper technique. This article shows that the auto-zero calibration technique is very different from the chopper technique and is one that, when implemented through modern process technology, allows the economical manufacturing of wideband, high-precision amplifiers with low output noise. The following discussion presents the functional principles of the chopper amplifier, the chopper-stabilized amplifier, and the AZA. It then compares the efficiencies of low-frequency filtering when applied to AZAs and standard operational amplifiers. Finally, three application examples demonstrate the use of an AZA as a signal amplifier and as a calibrating amplifier in dc—and wideband ac—applications.",
"title": ""
},
{
"docid": "9eece0709b7df087f3ea1afcfa154c64",
"text": "This platform paper introduces a methodology for simulating an autonomous vehicle on open public roads. The paper outlines the technology and protocol needed for running these simulations, and describes an instance where the Real Road Autonomous Driving Simulator (RRADS) was used to evaluate 3 prototypes in a between-participant study design. 35 participants were interviewed at length before and after entering the RRADS. Although our study did not use overt deception---the consent form clearly states that a licensed driver is operating the vehicle---the protocol was designed to support suspension of disbelief. Several participants who did not read the consent form clearly strongly believed that they were interacting with a fully autonomous vehicle.\n The RRADS platform provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways that a protocol deliberately using misdirection can gain ecologically valid reactions from study participants.",
"title": ""
},
{
"docid": "8143d59b02198a634c15d9f484f37d56",
"text": "The manufacturing industry is faced with strong competition making the companies’ knowledge resources and their systematic management a critical success factor. Yet, existing concepts for the management of process knowledge in manufacturing are characterized by major shortcomings. Particularly, they are either exclusively based on structured knowledge, e. g., formal rules, or on unstructured knowledge, such as documents, and they focus on isolated aspects of manufacturing processes. To address these issues, we present the Manufacturing Knowledge Repository, a holistic repository that consolidates structured and unstructured process knowledge to facilitate knowledge management and process optimization in manufacturing. First, we define requirements, especially the types of knowledge to be handled, e. g., data mining models and text documents. On this basis, we develop a conceptual repository data model associating knowledge items and process components such as machines and process steps. Furthermore, we discuss implementation issues including storage architecture variants and finally present both an evaluation of the data model and a proof of concept based on a prototypical implementation in a case example.",
"title": ""
},
{
"docid": "75591d4da0b01f1890022b320cdab705",
"text": "Many lakes in boreal and arctic regions have high concentrations of CDOM (coloured dissolved organic matter). Remote sensing of such lakes is complicated due to very low water leaving signals. There are extreme (black) lakes where the water reflectance values are negligible in almost entire visible part of spectrum (400–700 nm) due to the absorption by CDOM. In these lakes, the only water-leaving signal detectable by remote sensing sensors occurs as two peaks—near 710 nm and 810 nm. The first peak has been widely used in remote sensing of eutrophic waters for more than two decades. We show on the example of field radiometry data collected in Estonian and Swedish lakes that the height of the 810 nm peak can also be used in retrieving water constituents from remote sensing data. This is important especially in black lakes where the height of the 710 nm peak is still affected by CDOM. We have shown that the 810 nm peak can be used also in remote sensing of a wide variety of lakes. The 810 nm peak is caused by combined effect of slight decrease in absorption by water molecules and backscattering from particulate material in the water. Phytoplankton was the dominant particulate material in most of the studied lakes. Therefore, the height of the 810 peak was in good correlation with all proxies of phytoplankton biomass—chlorophyll-a (R2 = 0.77), total suspended matter (R2 = 0.70), and suspended particulate organic matter (R2 = 0.68). There was no correlation between the peak height and the suspended particulate inorganic matter. Satellite sensors with sufficient spatial and radiometric resolution for mapping lake water quality (Landsat 8 OLI and Sentinel-2 MSI) were launched recently. In order to test whether these satellites can capture the 810 nm peak we simulated the spectral performance of these two satellites from field radiometry data. Actual satellite imagery from a black lake was also used to study whether these sensors can detect the peak despite their band configuration. Sentinel 2 MSI has a nearly perfectly positioned band at 705 nm to characterize the 700–720 nm peak. We found that the MSI 783 nm band can be used to detect the 810 nm peak despite the location of this band is not in perfect to capture the peak.",
"title": ""
},
{
"docid": "7fd5f3461742db10503dd5e3d79fe3ed",
"text": "There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.",
"title": ""
},
{
"docid": "5998ce035f4027c6713f20f8125ec483",
"text": "As the use of automotive radar increases, performance limitations associated with radar-to-radar interference will become more significant. In this paper, we employ tools from stochastic geometry to characterize the statistics of radar interference. Specifically, using two different models for the spatial distributions of vehicles, namely, a Poisson point process and a Bernoulli lattice process, we calculate for each case the interference statistics and obtain analytical expressions for the probability of successful range estimation. This paper shows that the regularity of the geometrical model appears to have limited effect on the interference statistics, and so it is possible to obtain tractable tight bounds for the worst case performance. A technique is proposed for designing the duty cycle for the random spectrum access, which optimizes the total performance. This analytical framework is verified using Monte Carlo simulations.",
"title": ""
},
{
"docid": "cc5126ea8a6f9ebca587970377966067",
"text": "In this paper reliability model of the converter valves in VSC-HVDC system is analyzed. The internal structure and functions of converter valve are presented. Taking the StakPak IGBT from ABB Semiconductors for example, the mathematical reliability model for converter valve and its sub-module is established. By means of calculation and analysis, the reliability indices of converter valve under various voltage classes and redundancy designs are obtained, and then optimal redundant scheme is chosen. KeywordsReliability Analysis; VSC-HVDC; Converter Valve",
"title": ""
},
{
"docid": "4ec7480aeb1b3193d760d554643a1660",
"text": "The ability to learn is arguably the most crucial aspect of human intelligence. In reinforcement learning, we attempt to formalize a certain type of learning that is based on rewards and penalties. These supervisory signals should guide an agent to learn optimal behavior. In particular, this research focuses on deep reinforcement learning, where the agent should learn to play video games solely from pixel input. This thesis contributes to deep reinforcement learning research by assessing several variations to an existing state-of-the-art algorithm. First, we provide an extensive analysis on how the design decisions of the agent’s deep neural network affect its performance. Second, we introduce a novel neural layer that allows for local specializations in the visual input of the agents, as opposed to the global weight sharing that occurs in convolutional layers. Third, we introduce a ‘what’ and ‘where’ neural network architecture, inspired by the information flow of the visual cortical areas in the human brain. Finally, we explore prototype based deep reinforcement learning by introducing a novel output layer that is largely inspired by learning vector quantization. In a subset of our experiments, we show substantial improvements compared to existing alternatives.",
"title": ""
},
{
"docid": "33cf6c26de09c7772a529905d9fa6b5c",
"text": "Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that it has write latency much higher than read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation.\n This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write, will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to pro-actively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.",
"title": ""
}
] |
scidocsrr
|
7f360b9a8631c00f477628c509eb4820
|
Cloud IoT Based Greenhouse Monitoring System
|
[
{
"docid": "1a101ae3faeaa775737799c2324ef603",
"text": "in recent years, greenhouse technology in agriculture is to automation, information technology direction with the IOT (internet of things) technology rapid development and wide application. In the paper, control networks and information networks integration of IOT technology has been studied based on the actual situation of agricultural production. Remote monitoring system with internet and wireless communications combined is proposed. At the same time, taking into account the system, information management system is designed. The collected data by the system provided for agricultural research facilities.",
"title": ""
},
{
"docid": "e6021af3cb62968b290a750ec5d8b6bd",
"text": "This paper mainly focuses on the controlling of hom e appliances remotely and providing security when the user is away from the place. The system is SMS based and uses wireless technology to revolutionize the standards of living. This system provides ideal solution to the problems faced by home owners in daily life. The system is wireless t herefore more adaptable and cost-effective. The HACS system provides security against intrusion as well as automates various home appliances using SMS. The system uses GSM technology thu s providing ubiquitous access to the system for security and automated appliance control.",
"title": ""
}
] |
[
{
"docid": "9ad8a5b73430e4fe6b86d5fb8e2412b0",
"text": "We apply coset codes to adaptive modulation in fading channels. Adaptive modulation is a powerful technique to improve the energy efficiency and increase the data rate over a fading channel. Coset codes are a natural choice to use with adaptive modulation since the channel coding and modulation designs are separable. Therefore, trellis and lattice codes designed for additive white Gaussian noise (AWGN) channels can be superimposed on adaptive modulation for fading channels, with the same approximate coding gains. We first describe the methodology for combining coset codes with a general class of adaptive modulation techniques. We then apply this methodology to a spectrally efficient adaptive M -ary quadrature amplitude modulation (MQAM) to obtain trellis-coded adaptive MQAM. We present analytical and simulation results for this design which show an effective coding gain of 3 dB relative to uncoded adaptive MQAM for a simple four-state trellis code, and an effective 3.6-dB coding gain for an eight-state trellis code. More complex trellis codes are shown to achieve higher gains. We also compare the performance of trellis-coded adaptive MQAM to that of coded modulation with built-in time diversity and fixed-rate modulation. The adaptive method exhibits a power savings of up to 20 dB.",
"title": ""
},
{
"docid": "f15cb62cb81b71b063d503eb9f44d7c5",
"text": "This study presents an improved krill herd (IKH) approach to solve global optimization problems. The main improvement pertains to the exchange of information between top krill during motion calculation process to generate better candidate solutions. Furthermore, the proposed IKH method uses a new Lévy flight distribution and elitism scheme to update the KH motion calculation. This novel meta-heuristic approach can accelerate the global convergence speed while preserving the robustness of the basic KH algorithm. Besides, the detailed implementation procedure for the IKH method is described. Several standard benchmark functions are used to verify the efficiency of IKH. Based on the results, the performance of IKH is superior to or highly competitive with the standard KH and other robust population-based optimization methods. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7f3686b783273c4df7c4fb41fe7ccefd",
"text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e37c560150a94947117d7c796af73469",
"text": "For many players in financial markets, the price impact of their trading activity represents a large proportion of their transaction costs. This paper proposes a novel machine learning method for predicting the price impact of order book events. Specifically, we introduce a prediction system based on performance weighted ensembles of random forests. The system's performance is benchmarked using ensembles of other popular regression algorithms including: liner regression, neural networks and support vector regression using depth-of-book data from the BATS Chi-X exchange. The results show that recency-weighted ensembles of random forests produce over 15% greater prediction accuracy on out-of-sample data, for 5 out of 6 timeframes studied, compared with all benchmarks.",
"title": ""
},
{
"docid": "6f973565132ed9a535551ca7ec78086d",
"text": "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",
"title": ""
},
{
"docid": "599f4afe379a877e324547e09033465d",
"text": "Large-scale graph analytics is a central tool in many fields, and exemplifies the size and complexity of Big Data applications. Recent distributed graph processing frameworks utilize the venerable Bulk Synchronous Parallel (BSP) model and promise scalability for large graph analytics. This has been made popular by Google's Pregel, which provides an architecture design for BSP graph processing. Public clouds offer democratized access to medium-sized compute infrastructure with the promise of rapid provisioning with no capital investment. Evaluating BSP graph frameworks on cloud platforms with their unique constraints is less explored. Here, we present optimizations and analyses for computationally complex graph analysis algorithms such as betweenness-centrality and all-pairs shortest paths on a native BSP framework we have developed for the Microsoft Azure Cloud, modeled on the Pregel graph processing model. We propose novel heuristics for scheduling graph vertex processing in swaths to maximize resource utilization on cloud VMs that lead to a 3.5x performance improvement. We explore the effects of graph partitioning in the context of BSP, and show that even a well partitioned graph may not lead to performance improvements due to BSP's barrier synchronization. We end with a discussion on leveraging cloud elasticity for dynamically scaling the number of BSP workers to achieve a better performance than a static deployment, and at a significantly lower cost.",
"title": ""
},
{
"docid": "18e75ca50be98af1d5a6a2fd22b610d3",
"text": "We propose a new type of saliency—context-aware saliency—which aims at detecting the image regions that represent the scene. This definition differs from previous definitions whose goal is to either identify fixation points or detect the dominant object. In accordance with our saliency definition, we present a detection algorithm which is based on four principles observed in the psychological literature. The benefits of the proposed approach are evaluated in two applications where the context of the dominant objects is just as essential as the objects themselves. In image retargeting, we demonstrate that using our saliency prevents distortions in the important regions. In summarization, we show that our saliency helps to produce compact, appealing, and informative summaries.",
"title": ""
},
{
"docid": "b9c2db4d1b90f68833581585596144a2",
"text": "The Internet of things (IoT) is a next generation of Internet connected embedded ICT systems in a digital environment to seamlessly integrate supply chain and logistics processes. Integrating emerging IoT into the current ICT systems can be unique because of its intelligence, autonomous and pervasive applications. However, research on the IoT adoption in supply chain domain is scarce and acceptance of the IoT into the retail services in specific has been overly rhetoric. This study is drawn upon the organisational capability theory for developing an empirical model considering the effect of IoT capabilities on multiple dimensions of supply chain process integration, and in turn improves supply chain performance as well as organisational performance. Cross-sectional survey data from 227 Australian retail firms was analysed using structural equation modelling (SEM). The results indicate that IoT capability has a positive and significant effect on internal, customer-, and supplier-related process integration that in turn positively affects supply chain performance and organisational performance. Theoretically, the study contributes to a body of knowledge that integrates information systems research into supply chain integration by establishing an empirical evidence of how IoT-enabled process integration can enhance the performance at both supply chain and organisational level. Practically, the results inform the managers of the likely investment on IoT that can lead to chain’s performance outcome.",
"title": ""
},
{
"docid": "d88059813c4064ec28c58a8ab23d3030",
"text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.",
"title": ""
},
{
"docid": "eead6bfbb549a809046536f7d4b8acbd",
"text": "With the advent of numerous community forums, tasks associated with the same have gained importance in the recent past. With the influx of new questions every day on these forums, the issues of identifying methods to find answers to said questions, or even trying to detect duplicate questions, are of practical importance and are challenging in their own right. This paper aims at surveying some of the aforementioned issues, and methods proposed for tackling the same.",
"title": ""
},
{
"docid": "03ff1bdb156c630add72357005a142f5",
"text": "Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. Stateof-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos to computergenerated videos using deep convolutional neural networks. It extends the application of capsule networks beyond their original intention to the solving of inverse graphics problems.",
"title": ""
},
{
"docid": "45bf73a93f0014820864d1805f257bfc",
"text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is used to increase the voltage conversion ratio as well as it achieves high efficiency. In the proposed SEPIC based BDC converter is used to increase the voltage proposal of this is low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project PIC microcontro9 ller is used to give faster response than the existing system. The proposed scheme ensures that the voltage on the both sides of the converter is always matched thereby the conduction losses can be reduced to improve efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.",
"title": ""
},
{
"docid": "cf7af6838ae725794653bfce39c609b8",
"text": "This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence vectorization strategy, network depth and the deep feature to predict for image to sentence matching. We also generalize Word2VisualVec for matching a video to a sentence, by extending the predictive abilities to 3-D ConvNet features as well as a visual-audio representation. Experiments on four challenging image and video benchmarks detail Word2VisualVec’s properties, capabilities for image and video to sentence matching, and on all datasets its state-of-the-art results.",
"title": ""
},
{
"docid": "e0155b21837e87dd1c7bb01635d042e9",
"text": "The purpose of this paper is to provide the reader with an extensive technical analysis and review of the book, \"Multi agent Systems: A Modern Approach to Distributed Artificial Intelligence\" by Gerhard Weiss. Due to the complex nature of the topic of distributed artificial intelligence (DAT) and multi agent systems (MAS), this paper has been divided into two major segments: an overview of field and book analysis. The first section of the paper provides the reader with background information about the topic of DAT and MAS, which not only introduces the reader to the field but also assists the reader to comprehend the essential themes in such a complex field. On the other hand, the second portion of the paper provides the reader with a comprehensive review of the book from the viewpoint of a senior computer science student with an introductory knowledge of the field of artificial intelligence.",
"title": ""
},
{
"docid": "16c05466aa84e1704b528ccac34a4004",
"text": "Most cloud services are built with multi-tenancy which enables data and configuration segregation upon shared infrastructures. Each tenant essentially operates in an individual silo without interacting with other tenants. As cloud computing evolves we anticipate there will be increased need for tenants to collaborate across tenant boundaries. This will require cross-tenant trust models supported and enforced by the cloud service provider. Considering the on-demand self-service feature intrinsic to cloud computing, we propose a formal cross-tenant trust model (CTTM) and its role-based extension (RB-CTTM) integrating various types of trust relations into cross-tenant access control models which can be enforced by the multi-tenant authorization as a service (MTAaaS) platform in the cloud.",
"title": ""
},
{
"docid": "371c3b72d33c17080968e65f1a24787d",
"text": "Bullying and cyberbullying have serious consequences for all those involved, especially the victims, and its prevalence is high throughout all the years of schooling, which emphasizes the importance of prevention. This article describes an intervention proposal, made up of a program (Cyberprogram 2.0 Garaigordobil and Martínez-Valderrey, 2014a) and a videogame (Cooperative Cybereduca 2.0 Garaigordobil and Martínez-Valderrey, 2016b) which aims to prevent and reduce cyberbullying during adolescence and which has been validated experimentally. The proposal has four objectives: (1) To know what bullying and cyberbullying are, to reflect on the people involved in these situations; (2) to become aware of the harm caused by such behaviors and the severe consequences for all involved; (3) to learn guidelines to prevent and deal with these situations: know what to do when one suffers this kind of violence or when observing that someone else is suffering it; and (4) to foster the development of social and emotional factors that inhibit violent behavior (e.g., communication, ethical-moral values, empathy, cooperation…). The proposal is structured around 25 activities to fulfill these goals and it ends with the videogame. The activities are carried out in the classroom, and the online video is the last activity, which represents the end of the intervention program. The videogame (www.cybereduca.com) is a trivial pursuit game with questions and answers related to bullying/cyberbullying. This cybernetic trivial pursuit is organized around a fantasy story, a comic that guides the game. The videogame contains 120 questions about 5 topics: cyberphenomena, computer technology and safety, cybersexuality, consequences of bullying/cyberbullying, and coping with bullying/cyberbullying. To evaluate the effectiveness of the intervention, a quasi-experimental design, with repeated pretest-posttest measures and control groups, was used. During the pretest and posttest stages, 8 assessment instruments were administered. The experimental group randomly received the intervention proposal, which consisted of one weekly 1-h session during the entire school year. The results obtained with the analyses of variance of the data collected before and after the intervention in the experimental and control groups showed that the proposal significantly promoted the following aspects in the experimental group: (1) a decrease in face-to-face bullying and cyberbullying behaviors, in different types of school violence, premeditated and impulsive aggressiveness, and in the use of aggressive conflict-resolution strategies; and (2) an increase of positive social behaviors, self-esteem, cooperative conflict-resolution strategies, and the capacity for empathy. The results provide empirical evidence for the proposal. The importance of implementing programs to prevent bullying in all its forms, from the beginning of schooling and throughout formal education, is discussed.",
"title": ""
},
{
"docid": "a10da2542efd44725a7ca499bd7019d3",
"text": "During growth on fermentable substrates, such as glucose, pyruvate, which is the end-product of glycolysis, can be used to generate acetyl-CoA in the cytosol via acetaldehyde and acetate, or in mitochondria by direct oxidative decarboxylation. In the latter case, the mitochondrial pyruvate carrier (MPC) is responsible for pyruvate transport into mitochondrial matrix space. During chronological aging, yeast cells which lack the major structural subunit Mpc1 display a reduced lifespan accompanied by an age-dependent loss of autophagy. Here, we show that the impairment of pyruvate import into mitochondria linked to Mpc1 loss is compensated by a flux redirection of TCA cycle intermediates through the malic enzyme-dependent alternative route. In such a way, the TCA cycle operates in a \"branched\" fashion to generate pyruvate and is depleted of intermediates. Mutant cells cope with this depletion by increasing the activity of glyoxylate cycle and of the pathway which provides the nucleocytosolic acetyl-CoA. Moreover, cellular respiration decreases and ROS accumulate in the mitochondria which, in turn, undergo severe damage. These acquired traits in concert with the reduced autophagy restrict cell survival of the mpc1∆ mutant during chronological aging. Conversely, the activation of the carnitine shuttle by supplying acetyl-CoA to the mitochondria is sufficient to abrogate the short-lived phenotype of the mutant.",
"title": ""
},
{
"docid": "c9aa8454246e983e9aa2752bfa667f43",
"text": "BACKGROUND\nADHD is diagnosed and treated more often in males than in females. Research on gender differences suggests that girls may be consistently underidentified and underdiagnosed because of differences in the expression of the disorder among boys and girls. One aim of the present study was to assess in a clinical sample of medication naïve boys and girls with ADHD, whether there were significant gender x diagnosis interactions in co-existing symptom severity and executive function (EF) impairment. The second aim was to delineate specific symptom ratings and measures of EF that were most important in distinguishing ADHD from healthy controls (HC) of the same gender.\n\n\nMETHODS\nThirty-seven females with ADHD, 43 males with ADHD, 18 HC females and 32 HC males between 8 and 17 years were included. Co-existing symptoms were assessed with self-report scales and parent ratings. EF was assessed with parent ratings of executive skills in everyday situations (BRIEF), and neuropsychological tests. The three measurement domains (co-existing symptoms, BRIEF, neuropsychological EF tests) were investigated using analysis of variance (ANOVA) and random forest classification.\n\n\nRESULTS\nANOVAs revealed only one significant diagnosis x gender interaction, with higher rates of self-reported anxiety symptoms in females with ADHD. Random forest classification indicated that co-existing symptom ratings was substantially better in distinguishing subjects with ADHD from HC in females (93% accuracy) than in males (86% accuracy). The most important distinguishing variable was self-reported anxiety in females, and parent ratings of rule breaking in males. Parent ratings of EF skills were better in distinguishing subjects with ADHD from HC in males (96% accuracy) than in females (92% accuracy). Neuropsychological EF tests had only a modest ability to categorize subjects as ADHD or HC in males (73% accuracy) and females (79% accuracy).\n\n\nCONCLUSIONS\nOur findings emphasize the combination of self-report and parent rating scales for the identification of different comorbid symptom expression in boys and girls already diagnosed with ADHD. Self-report scales may increase awareness of internalizing problems particularly salient in females with ADHD.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
},
{
"docid": "3dfd31873c3d13e8e55a9e0c5bc6ed7c",
"text": "Apache Spark is an open source distributed data processing platform that uses distributed memory abstraction to process large volume of data efficiently. However, performance of a particular job on Apache Spark platform can vary significantly depending on the input data type and size, design and implementation of the algorithm, and computing capability, making it extremely difficult to predict the performance metric of a job such as execution time, memory footprint, and I/O cost. To address this challenge, in this paper, we present a simulation driven prediction model that can predict job performance with high accuracy for Apache Spark platform. Specifically, as Apache spark jobs are often consist of multiple sequential stages, the presented prediction model simulates the execution of the actual job by using only a fraction of the input data, and collect execution traces (e.g., I/O overhead, memory consumption, execution time) to predict job performance for each execution stage individually. We evaluated our prediction framework using four real-life applications on a 13 node cluster, and experimental results show that the model can achieve high prediction accuracy.",
"title": ""
}
] |
scidocsrr
|
ecdbe648e13cfa9e0faa5e4fc5d50a62
|
Squared Earth Mover's Distance Loss for Training Deep Neural Networks on Ordered-Classes
|
[
{
"docid": "bf85db5489a61b5fca8d121de198be97",
"text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.",
"title": ""
},
{
"docid": "12531afcc6d9ecdec39adef0d0e6b391",
"text": "Convolutional Neural Networks (ConvNets) have successfully contributed to improve the accuracy of regression-based methods for computer vision tasks such as human pose estimation, landmark localization, and object detection. The network optimization has been usually performed with L2 loss and without considering the impact of outliers on the training process, where an outlier in this context is defined by a sample estimation that lies at an abnormal distance from the other training sample estimations in the objective space. In this work, we propose a regression model with ConvNets that achieves robustness to such outliers by minimizing Tukey's biweight function, an M-estimator robust to outliers, as the loss function for the ConvNet. In addition to the robust loss, we introduce a coarse-to-fine model, which processes input images of progressively higher resolutions for improving the accuracy of the regressed values. In our experiments, we demonstrate faster convergence and better generalization of our robust loss function for the tasks of human pose estimation and age estimation from face images. We also show that the combination of the robust loss function with the coarse-to-fine model produces comparable or better results than current state-of-the-art approaches in four publicly available human pose estimation datasets.",
"title": ""
}
] |
[
{
"docid": "6c12755ba2580d5d9b794b9a33c0304a",
"text": "A fundamental part of conducting cross-disciplinary web science research is having useful, high-quality datasets that provide value to studies across disciplines. In this paper, we introduce a large, hand-coded corpus of online harassment data. A team of researchers collaboratively developed a codebook using grounded theory and labeled 35,000 tweets. Our resulting dataset has roughly 15% positive harassment examples and 85% negative examples. This data is useful for training machine learning models, identifying textual and linguistic features of online harassment, and for studying the nature of harassing comments and the culture of trolling.",
"title": ""
},
{
"docid": "cd89e893b318dca4fbbe6c56d66efbda",
"text": "The structure of working memory and its development across the childhood years were investigated in children 4-15 years of age. The children were given multiple assessments of each component of the A. D. Baddeley and G. Hitch (1974) working memory model. Broadly similar linear functions characterized performance on all measures as a function of age. From 6 years onward, a model consisting of 3 distinct but correlated factors corresponding to the working memory model provided a good fit to the data. The results indicate that the basic modular structure of working memory is present from 6 years of age and possibly earlier, with each component undergoing sizable expansion in functional capacity throughout the early and middle school years to adolescence.",
"title": ""
},
{
"docid": "ad584a07befbfff1dff36c18ea830a4e",
"text": "In this paper, we review some of the novel emerging memory technologies and how they can enable energy-efficient implementation of large neuromorphic computing systems. We will highlight some of the key aspects of biological computation that are being mimicked in these novel nanoscale devices, and discuss various strategies employed to implement them efficiently. Though large scale learning systems have not been implemented using these devices yet, we will discuss the ideal specifications and metrics to be satisfied by these devices based on theoretical estimations and simulations. We also outline the emerging trends and challenges in the path towards successful implementations of large learning systems that could be ubiquitously deployed for a wide variety of cognitive computing tasks.",
"title": ""
},
{
"docid": "d952de00554b9a6bb21fbce802729b3f",
"text": "In the past five years there has been a dramatic increase in work on Search Based Software Engineering (SBSE), an approach to software engineering in which search based optimisation algorithms are used to address problems in Software Engineering. SBSE has been applied to problems throughout the Software Engineering lifecycle, from requirements and project planning to maintenance and re-engineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This paper provides a review and classification of literature on SBSE. The paper identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.",
"title": ""
},
{
"docid": "69b78ff6fd67def0e0c1ee016630270b",
"text": "In the world of digitization, the growth of big data is raising at large scale with usage of high performance computing. The huge data in English and Hindi is available on internet and social media which need to be extracted or summarized in user required form. In this paper we are presenting Bilingual (Hindi and English) unsupervised automatic text summarization using deep learning. which is an important research area with in Natural Language Processing, Machine Learning and data mining, to improve result accuracy, we are using restricted Boltzmann machine to generate a shorter version of original document without losing its important information. In this algorithm we are exploring the features to improve the relevance of sentences in the dataset.",
"title": ""
},
{
"docid": "b5c8d34b75dbbfdeb666fd76ef524be7",
"text": "Systematic Literature Reviews (SLR) may not provide insight into the \"state of the practice\" in SE, as they do not typically include the \"grey\" (non-published) literature. A Multivocal Literature Review (MLR) is a form of a SLR which includes grey literature in addition to the published (formal) literature. Only a few MLRs have been published in SE so far. We aim at raising the awareness for MLRs in SE by addressing two research questions (RQs): (1) What types of knowledge are missed when a SLR does not include the multivocal literature in a SE field? and (2) What do we, as a community, gain when we include the multivocal literature and conduct MLRs? To answer these RQs, we sample a few example SLRs and MLRs and identify the missing and the gained knowledge due to excluding or including the grey literature. We find that (1) grey literature can give substantial benefits in certain areas of SE, and that (2) the inclusion of grey literature brings forward certain challenges as evidence in them is often experience and opinion based. Given these conflicting viewpoints, the authors are planning to prepare systematic guidelines for performing MLRs in SE.",
"title": ""
},
{
"docid": "affc8a21912d53a64e74284864600815",
"text": "plg /9 40 60 16 7 Ju n 94 Proceedings of the Twelfth National Conference on Arti cial Intelligence, 1994 Corpus-Driven Knowledge Acquisition for Discourse Analysis Stephen Soderland and Wendy Lehnert Department of Computer Science University of Massachusetts Amherst, MA 01003-4610 soderlan@cs.umass.edu lehnert@cs.umass.edu Abstract The availability of large on-line text corpora provides a natural and promising bridge between the worlds of natural language processing (NLP) and machine learning (ML). In recent years, the NLP community has been aggressively investigating statistical techniques to drive part-of-speech taggers, but application-speci c text corpora can be used to drive knowledge acquisition at much higher levels as well. In this paper we will show how ML techniques can be used to support knowledge acquisition for information extraction systems. It is often very di cult to specify an explicit domain model for many information extraction applications, and it is always labor intensive to implement hand-coded heuristics for each new domain. We have discovered that it is nevertheless possible to use ML algorithms in order to capture knowledge that is only implicitly present in a representative text corpus. Our work addresses issues traditionally associated with discourse analysis and intersentential inference generation, and demonstrates the utility of ML algorithms at this higher level of language analysis. The bene ts of our work address the portability and scalability of information extraction (IE) technologies. When hand-coded heuristics are used to manage discourse analysis in an information extraction system, months of programming e ort are easily needed to port a successful IE system to a new domain. We will show how ML algorithms can reduce this development time to a few days of automated corpus analysis without any resulting degradation of overall system performance. 1. Information Extraction at the Discourse Level All IE systems must operate at both the sentence level and the discourse level. At the sentence level, relevant information is extracted by a sentence analyzer according to pre-de ned domain guidelines. Recent performance evaluations sponsored by ARPA have shown This research was supported by NSF Grant no. EEC9209623, State/Industry/University Cooperative Research on Intelligent Information Retrieval. that a number of di erent parsing strategies can handle sentence-level information extraction with varying degrees of success (Lehnert and Sundheim 1991, Sundheim 1991). This paper will concentrate on the discourse level, by which we mean all processing that takes place after sentence analysis. Once information has been extracted locally from various text segments, the IE system must make a series of higher-level decisions before producing its nal output. Multiple referents must be merged when they are coreferent, important relationships between distinct referents must be recognized, and referents that are spurious with respect to the IE application must be discarded. To get a sense of the decisions involved in discourse, consider the following fragment of a text from the MUC-5 micro-electronics domain.",
"title": ""
},
{
"docid": "fcc434f43baae2cb1dbddd2f76fb9c7f",
"text": "For medical diagnoses and treatments, it is often desirable to wirelessly trace an object that moves inside the human body. A magnetic tracing technique suggested for such applications uses a small magnet as the excitation source, which does not require the power supply and connection wire. It provides good tracing accuracy and can be easily implemented. As the magnet moves, it establishes around the human body a static magnetic field, whose intensity is related to the magnet's 3-D position and 2-D orientation parameters. With magnetic sensors, these magnetic intensities can be detected in some predetermined spatial points, and the position and orientation parameters can be computed. Typically, a nonlinear optimization algorithm is applied to such a problem, but a linear algorithm is preferable for faster, more reliable computation, and lower complexity. In this paper, we propose a linear algorithm to determine the 5-D magnet's position and orientation parameters. With the data from five (or more) three-axis magnetic sensors, this algorithm results in a solution by the matrix and algebra computations. We applied this linear algorithm on the real localization system, and the results of simulations and real experiments show that satisfactory tracing accuracy can be achieved by using a sensor array with enough three-axis magnetic sensors.",
"title": ""
},
{
"docid": "0bc1c637d6f4334dd8a27491ebde40d6",
"text": "Osteoarthritis of the hip describes a clinical syndrome of joint pain accompanied by varying degrees of functional limitation and reduced quality of life. Osteoarthritis may not be progressive and most patients will not need surgery, with their symptoms adequately controlled by non-surgical measures. The treatment of hip osteoarthritis is aimed at reducing pain and stiffness and improving joint mobility. Total hip replacement remains the most effective treatment option but it is a major surgery with potential serious complications. NICE guideline has suggested a holistic approach to management of hip osteoarthritis which includes both nonpharmacological and pharmacological treatments. The non-pharmacological treatments range from education ,physical therapy and behavioral changes ,walking aids .The ESCAPE( Enabling Self-Management and Coping of Arthritic Pain Through Exercise) rehabilitation programme for hip and knee osteoarthritis which integrates simple education, self-management and coping strategies, with an exercise regimen has shown to be more cost-effective than usual care. There is a choice of reviewed pharmacological treatments available, but there are few current reviews of possible nonpharmacological methods. This review will focus on the non-pharmacological and non-surgical methods.",
"title": ""
},
{
"docid": "6ec3f783ec49c0b3e51a704bc3bd03ec",
"text": "Abstract: It has been suggested by many supply chain practitioners that in certain cases inventory can have a stimulating effect on the demand. In mathematical terms this amounts to the demand being a function of the inventory level alone. In this work we propose a logistic growth model for the inventory dependent demand rate and solve first the continuous time deterministic optimal control problem of maximising the present value of the total net profit over an infinite horizon. It is shown that under a strict condition there is a unique optimal stock level which the inventory planner should maintain in order to satisfy demand. The stochastic version of the optimal control problem is considered next. A bang-bang type of optimal control problem is formulated and the associated Hamilton-Jacobi-Bellman equation is solved. The inventory level that signifies a switch in the ordering strategy is worked out in the stochastic case.",
"title": ""
},
{
"docid": "74ff09a1d3ca87a0934a1b9095c282c4",
"text": "The cancer metastasis suppressor protein KAI1/CD82 is a member of the tetraspanin superfamily. Recent studies have demonstrated that tetraspanins are palmitoylated and that palmitoylation contributes to the organization of tetraspanin webs or tetraspanin-enriched microdomains. However, the effect of palmitoylation on tetraspanin-mediated cellular functions remains obscure. In this study, we found that tetraspanin KAI1/CD82 was palmitoylated when expressed in PC3 metastatic prostate cancer cells and that palmitoylation involved all of the cytoplasmic cysteine residues proximal to the plasma membrane. Notably, the palmitoylation-deficient KAI1/CD82 mutant largely reversed the wild-type KAI1/CD82's inhibitory effects on migration and invasion of PC3 cells. Also, palmitoylation regulates the subcellular distribution of KAI1/CD82 and its association with other tetraspanins, suggesting that the localized interaction of KAI1/CD82 with tetraspanin webs or tetraspanin-enriched microdomains is important for KAI1/CD82's motility-inhibitory activity. Moreover, we found that KAI1/CD82 palmitoylation affected motility-related subcellular events such as lamellipodia formation and actin cytoskeleton organization and that the alteration of these processes likely contributes to KAI1/CD82's inhibition of motility. Finally, the reversal of cell motility seen in the palmitoylation-deficient KAI1/CD82 mutant correlates with regaining of p130(CAS)-CrkII coupling, a signaling step important for KAI1/CD82's activity. Taken together, our results indicate that palmitoylation is crucial for the functional integrity of tetraspanin KAI1/CD82 during the suppression of cancer cell migration and invasion.",
"title": ""
},
{
"docid": "8e6a83df0235cd6e27fbc14abb61c5fc",
"text": "The management of postprandial hyperglycemia is an important strategy in the control of diabetes mellitus and complications associated with the disease, especially in the diabetes type 2. Therefore, inhibitors of carbohydrate hydrolyzing enzymes can be useful in the treatment of diabetes and medicinal plants can offer an attractive strategy for the purpose. Vaccinium arctostaphylos leaves are considered useful for the treatment of diabetes mellitus in some countries. In our research for antidiabetic compounds from natural sources, we found that the methanol extract of the leaves of V. arctostaphylos displayed a potent inhibitory activity on pancreatic α-amylase activity (IC50 = 0.53 (0.53 - 0.54) mg/mL). The bioassay-guided fractionation of the extract resulted in the isolation of quercetin as an active α-amylase inhibitor. Quercetin showed a dose-dependent inhibitory effect with IC50 value 0.17 (0.16 - 0.17) mM.",
"title": ""
},
{
"docid": "1fcd396aa8a9c28425f519c5662cfb8b",
"text": "This paper presents a novel power flow formulation and an effective solution method for general unbalanced radial distribution systems. Comprehensive models are considered including lines, switches, transformers, shunt capacitors, cogenerators, and several types of loads. A new problem formulation of three-phase distribution power flow equations taking into account the radial structure of the distribution network is presented. A distinguishing feature of the new problem formulation is that it significantly reduces the number of power flow equations, as compared with the conventional formulation. The numerical properties as well as the structural properties of distribution systems are exploited resulting in a fast decoupled solution algorithm. The proposed solution algorithm is evaluated on three-phase unbalanced 292-bus and 394-bus test systems with very promising results.",
"title": ""
},
{
"docid": "9e7d159f53b51bea1a9026add1d49fdb",
"text": "This paper introduces signaling in a standard market microstructure model so as to explore the economic circumstances under which hype and dump manipulation can be an equilibrium outcome. We consider a discrete time, multi-period model with stages of signaling and asset trading. A single informed trader contemplates whether or not to spread a (possibly dishonest) rumor on the asset payoff among uninformed traders. Dishonest rumor-mongering is costly due to regulatory enforcement, and the uninformed traders who access the rumor can besophisticatedor naive. The sophisticated traders correctly anticipate the relationship between the rumor and the asset payoff, whereas the naive ones take the rumor at its face value as if it truthfully reveals the asset payoff. The presence of sophisticated traders puts the informed trader off from rumor-mongering, because sophisticates fully infer the asset payoff from the rumor, reducing the informational rents enjoyed by the informed trader. Nevertheless we show that it can be optimal for an informed trader to create false hype among uninformed traders provided that there is at least one naive trader in the market and the cost of dishonest rumor-mongering is not too low. The false hype allows the informed trader to sell at an inflated price or buy at a deflated one. Intense regulatory enforcement, which makes dishonest rumor-mongering very costly, may not necessarily curb hype and dump schemes. Market depth and trading volume rise with “hype and dump” while market efficiency decreases. JEL Classification: G11, G14.",
"title": ""
},
{
"docid": "2fc02a69913c8f03f328d0148ebb48f3",
"text": "Recent platforms, like Uber and Lyft, offer service to consumers via “self-scheduling” providers who decide for themselves how often to work. These platforms may charge consumers prices and pay providers wages that both adjust based on prevailing demand conditions. For example, Uber uses a “surge pricing” policy, which pays providers a fixed commission of its dynamic price. With a stylized model that yields analytical and numerical results, we study several pricing schemes that could be implemented on a service platform, including surge pricing. We find that the optimal contract substantially increases the platform’s profit relative to contracts that have a fixed price or fixed wage (or both), and although surge pricing is not optimal, it generally achieves nearly the optimal profit. Despite its merits for the platform, surge pricing has been criticized because of concerns for the welfare of providers and consumers. In our model, as labor becomes more expensive, providers and consumers are better off with surge pricing because providers are better utilized and consumers benefit both from lower prices during normal demand and expanded access to service during peak demand. We conclude, in contrast to popular criticism, that all stakeholders can benefit from the use of surge pricing on a platform with self-scheduling capacity.",
"title": ""
},
{
"docid": "c5eb252d17c2bec8ab168ca79ec11321",
"text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260) 1 ar X iv :1 80 2. 08 67 4v 1 [ cs .L G ] 2 3 Fe b 20 18",
"title": ""
},
{
"docid": "836eb904c483cd157807302997dd1aac",
"text": "Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS’ single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS’ capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.",
"title": ""
},
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "7474ffa9e6009ca5ded3d217a8dd2375",
"text": "The cost of error correction has been increasing exponentially with the advancement of software industry. To minimize software errors, it is necessary to extract accurate requirements in the early stage of software development. In the previous study, we extracted the priorities of requirements based on the Use Case Point (UCP), which however revealed the issues inherent to the existing UCP as follows. (i) The UCP failed to specify the structure of use cases or the method of write the use cases, and (ii) the number of transactions determined the use case weight in the UCP. Yet, efforts taken for implementation depend on the types and number of operations performed in each transaction. To address these issues, the present paper proposes an improved UCP and applies it to the prioritization. The proposed method enables more accurate measurement than the existing UCP-based prioritization.",
"title": ""
},
{
"docid": "e7008e964cac54f8580142cc9d3c97c8",
"text": "Regression based methods are not performing as well as detection based methods for human pose estimation. A central problem is that the structural information in the pose is not well exploited in the previous regression methods. In this work, we propose a structure-aware regression approach. It adopts a reparameterized pose representation using bones instead of joints. It exploits the joint connection structure to define a compositional loss function that encodes the long range interactions in the pose. It is simple, effective, and general for both 2D and 3D pose estimation in a unified setting. Comprehensive evaluation validates the effectiveness of our approach. It significantly advances the state-of-the-art on Human3.6M [20] and is competitive with state-of-the-art results on MPII [3].",
"title": ""
}
] |
scidocsrr
|
be84f2dac7dcbbc024dbb22040db8cdf
|
DeepTraffic: Driving Fast through Dense Traffic with Deep Reinforcement Learning
|
[
{
"docid": "9ec7b122117acf691f3bee6105deeb81",
"text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"title": ""
},
{
"docid": "bd3dd79aa5ecb5815b7ca4d461578f20",
"text": "Deep Reinforcement Learning (RL) recently emerged as one of the most competitive approaches for learning in sequential decision making problems with fully observable environments, e.g., computer Go. However, very little work has been done in deep RL to handle partially observable environments. We propose a new architecture called Action-specific Deep Recurrent QNetwork (ADRQN) to enhance learning performance in partially observable domains. Actions are encoded by a fully connected layer and coupled with a convolutional observation to form an action-observation pair. The time series of actionobservation pairs are then integrated by an LSTM layer that learns latent states based on which a fully connected layer computes Q-values as in conventional Deep Q-Networks (DQNs). We demonstrate the effectiveness of our new architecture in several partially observable domains, including flickering Atari games.",
"title": ""
}
] |
[
{
"docid": "abaf590dfff79cd3282b36db369c8a32",
"text": "Classifying a visual concept merely from its associated online textual source, such as a Wikipedia article, is an attractive research topic in zero-shot learning because it alleviates the burden of manually collecting semantic attributes. Recent work has pursued this approach by exploring various ways of connecting the visual and text domains. In this paper, we revisit this idea by going further to consider one important factor: the textual representation is usually too noisy for the zero-shot learning application. This observation motivates us to design a simple yet effective zero-shot learning method that is capable of suppressing noise in the text. Specifically, we propose an l2,1-norm based objective function which can simultaneously suppress the noisy signal in the text and learn a function to match the text document and visual features. We also develop an optimization algorithm to efficiently solve the resulting problem. By conducting experiments on two large datasets, we demonstrate that the proposed method significantly outperforms those competing methods which rely on online information sources but with no explicit noise suppression. Furthermore, we make an in-depth analysis of the proposed method and provide insight as to what kind of information in documents is useful for zero-shot learning.",
"title": ""
},
{
"docid": "08dbe11a42f7018966c9ca2db5c1fa98",
"text": "Person re-identification has important applications in video surveillance. It is particularly challenging because observed pedestrians undergo significant variations across camera views, and there are a large number of pedestrians to be distinguished given small pedestrian images from surveillance videos. This chapter discusses different approaches of improving the key components of a person reidentification system, including feature design, feature learning and metric learning, as well as their strength and weakness. It provides an overview of various person reidentification systems and their evaluation on benchmark datasets. Mutliple benchmark datasets for person re-identification are summarized and discussed. The performance of some state-of-the-art person identification approaches on benchmark datasets is compared and analyzed. It also discusses a few future research directions on improving benchmark datasets, evaluation methodology and system desgin.",
"title": ""
},
{
"docid": "2c9138a706f316a10104f2da9a054e44",
"text": "Research on face spoofing detection has mainly been focused on analyzing the luminance of the face images, hence discarding the chrominance information which can be useful for discriminating fake faces from genuine ones. In this work, we propose a new face anti-spoofing method based on color texture analysis. We analyze the joint color-texture information from the luminance and the chrominance channels using a color local binary pattern descriptor. More specifically, the feature histograms are extracted from each image band separately. Extensive experiments on two benchmark datasets, namely CASIA face anti-spoofing and Replay-Attack databases, showed excellent results compared to the state-of-the-art. Most importantly, our inter-database evaluation depicts that the proposed approach showed very promising generalization capabilities.",
"title": ""
},
{
"docid": "1de1324d0f10a0e58c2adccdd8cb2c21",
"text": "In keyword search advertising, many advertisers operate on a limited budget. Yet how limited budgets affect keyword search advertising has not been extensively studied. This paper offers an analysis of the generalized second-price auction with budget constraints. We find that the budget constraint may induce advertisers to raise their bids to the highest possible amount for two different motivations: to accelerate the elimination of the budget-constrained competitor as well as to reduce their own advertising cost. Thus, in contrast to the current literature, our analysis shows that both budget-constrained and unconstrained advertisers could bid more than their own valuation. We further extend the model to consider dynamic bidding and budget-setting decisions.",
"title": ""
},
{
"docid": "24c1b31bac3688c901c9b56ef9a331da",
"text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.",
"title": ""
},
{
"docid": "c88370dfcf79534c019fd797f055f393",
"text": "Mobile Online Social Networks (mOSNs) have recently grown in popularity. With the ubiquitous use of mobile devices and a rapid shift of technology and access to OSNs, it is important to examine the impact of mobile OSNs from a privacy standpoint. We present a taxonomy of ways to study privacy leakage and report on the current status of known leakages. We find that all mOSNs in our study exhibit some leakage of private information to third parties. Novel concerns include combination of new features unique to mobile access with the leakage in OSNs that we had examined earlier.",
"title": ""
},
{
"docid": "556a7bd39da4d352642ea3c556a3cebf",
"text": "Merger and Acquisition (M&A) has been a critical practice about corporate restructuring. Previous studies are mostly devoted to evaluating the suitability of M&A between a pair of investor and target company, or a target company for its propensity of being acquired. This paper focuses on the dual problem of predicting an investor’s prospective M&A based on its activities and firmographics. We propose to use a mutually-exciting point process with a regression prior to quantify the investor’s M&A behavior. Our model is motivated by the so-called contagious ‘wave-like’ M&A phenomenon, which has been well-recognized by the economics and management communities. A tailored model learning algorithm is devised that incorporates both static profile covariates and past M&A activities. Results on CrunchBase suggest the superiority of our model. The collected dataset and code will be released together with the paper.",
"title": ""
},
{
"docid": "0e91b49f051f960d8f4c7786f2bdc257",
"text": "The importance of measuring the performance of e-government cannot be overemphasized. In this paper, a flexible framework is suggested to choose an appropriate strategy to measure the tangible and intangible benefits of e-government. An Indian case study of NDMC (New Delhi Municipal Corporation) has been taken up for analysis and placement into the framework. The results obtained suggest that to have a proper evaluation of tangible and intangible benefits of e-government, the projects should be in a mature stage with proper information systems in place. All of the e-government projects in India are still in a nascent stage; hence, proper information flow for calculating 'return on e-government' considering tangible and intangible benefits cannot be fully ascertained.",
"title": ""
},
{
"docid": "0bfba7797a0e7dcd4817c10d4df350db",
"text": "Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, the classification method and fine counting based on Support Vector Machines (SVM) using global features are designed. Finally, the insect counting and recognition system is implemented on Raspberry PI. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with the conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and average classifying accuracy is 90.18% on Raspberry PI. The proposed system is easy-to-use and provides efficient and accurate recognition data, therefore, it can be used for intelligent agriculture applications.",
"title": ""
},
{
"docid": "c3b4bcf57473321dc401ac583438b3a3",
"text": "Face recognition from RGB-D images utilizes 2 complementary types of image data, i.e. colour and depth images, to achieve more accurate recognition. In this paper, we propose a face recognition system based on deep learning, which can be used to verify and identify a subject from the colour and depth face images captured with a consumer-level RGB-D camera. To recognize faces with colour and depth information, our system contains 3 parts: depth image recovery, deep learning for feature extraction, and joint classification. To alleviate the problem of the limited size of available RGB-D data for deep learning, our deep network is firstly trained with colour face dataset, and later fine-tuned on depth face images for transfer learning. Our experiments on some public and our own RGB-D face datasets show that the proposed face recognition system provides very accurate face recognition results and it is robust against variations in head rotation and environmental illumination.",
"title": ""
},
{
"docid": "91a0a0ceb3f4774efd992816ed84ef73",
"text": "The bias in the news media is an inherent flaw of the news production process. The resulting bias often causes a sharp increase in political polarization and in the cost of conflict on social issues such as Iraq war. It is very difficult, if not impossible, for readers to have penetrating views on realities against such bias. This paper presents NewsCube, a novel Internet news service aiming at mitigating the effect of media bias. NewsCube automatically creates and promptly provides readers with multiple classified viewpoints on a news event of interest. As such, it effectively helps readers understand a fact from a plural of viewpoints and formulate their own, more balanced viewpoints. While media bias problem has been studied extensively in communications and social sciences, our work is the first to develop a news service as a solution and study its effect. We discuss the effect of the service through various user studies.",
"title": ""
},
{
"docid": "9e439c83f4c29b870b1716ceae5aa1f3",
"text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, fuzzy self-tuning of PID controller is designed to control of active suspension system for quarter car model. A fuzzy self-tuning is used to develop the optimal control gain for PID controller (proportional, integral, and derivative gains) to minimize suspension working space of the sprung mass and its change rate to achieve the best comfort of the driver. The results of active suspension system with fuzzy self-tuning PID controller are presented graphically and comparisons with the PID and passive system. It is found that, the effectiveness of using fuzzy self-tuning appears in the ability to tune the gain parameters of PID controller",
"title": ""
},
{
"docid": "2427019698358950791ee46506a28e7b",
"text": "This article describes a novel way of combining data mining techniques on Internet data in order to discover actionable marketing intelligence in electronic commerce scenarios. The data that is considered not only covers various types of server and web meta information, but also marketing data and knowledge. Furthermore, heterogeneity resolution thereof and Internet- and electronic commerce-specific pre-processing activities are embedded. A generic web log data hypercube is formally defined and schematic designs for analytical and predictive activities are given. From these materialised views, various online analytical web usage data mining techniques are shown, which include marketing expertise as domain knowledge and are specifically designed for electronic commerce purposes.",
"title": ""
},
{
"docid": "0dfd5345c2dc3fe047dcc635760ffedd",
"text": "This paper presents a fast, joint spatial- and Doppler velocity-based, probabilistic approach for ego-motion estimation for single and multiple radar-equipped robots. The normal distribution transform is used for the fast and accurate position matching of consecutive radar detections. This registration technique is successfully applied to laser-based scan matching. To overcome discontinuities of the original normal distribution approach, an appropriate clustering technique provides a globally smooth mixed-Gaussian representation. It is shown how this matching approach can be significantly improved by taking the Doppler information into account. The Doppler information is used in a density-based approach to extend the position matching to a joint likelihood optimization function. Then, the estimated ego-motion maximizes this function. Large-scale real world experiments in an urban environment using a 77 GHz radar show the robust and accurate ego-motion estimation of the proposed algorithm. In the experiments, comparisons are made to state-of-the-art algorithms, the vehicle odometry, and a high-precision inertial measurement unit.",
"title": ""
},
{
"docid": "06ba0cd00209a7f4f200395b1662003e",
"text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.",
"title": ""
},
{
"docid": "4420990e60ca5a043d1353019c947db5",
"text": "Recent work has shown that the end-to-end approach using convolutional neural network (CNN) is effective in various types of machine learning tasks. For audio signals, the approach takes raw waveforms as input using an 1-D convolution layer. In this paper, we improve the 1-D CNN architecture for music auto-tagging by adopting building blocks from state-of-the-art image classification models, ResNets and SENets, and adding multi-level feature aggregation to it. We compare different combinations of the modules in building CNN architectures. The results show that they achieve significant improvements over previous state-of-the-art models on the MagnaTagATune dataset and comparable results on Million Song Dataset. Furthermore, we analyze and visualize our model to show how the 1-D CNN operates.",
"title": ""
},
{
"docid": "8b7896d075e6c530123a9948f97b69bc",
"text": "With the advances in Virtual Reality (VR) and physiological sensing technology, even more immersive computer-mediated communication through life-like characteristics is now possible. In response to the current lack of culture, expression and emotions in VR avatars, we propose a two-fold solution. First, integration of bio-signal sensors into the HMD and techniques to detect aspects of the emotional state of the user. Second, the use of this data to generate expressive avatars which we refer to as Emotional Beasts. The creation of Emotional Beasts will allow us to experiment with the manipulation of a user's self-expression in VR space and as well as the perception of others in it, providing some valuable tools to evoke a desired emotional reaction. As this medium moves forward, this and other tools are what will help the field of virtual reality expand from a medium of surface-level experience to one of deep, emotionally compelling human-to-human connection.",
"title": ""
},
{
"docid": "4d6e9bc0a8c55e65d070d1776e781173",
"text": "As electronic device feature sizes scale-down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance critical for the PIC is electro-optic modulators (EOM), whose performances depend inherently on enhancing LMIs. Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...",
"title": ""
},
{
"docid": "b8584fa2dc6882a3d9670f387ea8185c",
"text": "Measures aimed to improve the diversity of images and image features in evolutionary art help to direct search toward more novel and creative parts of the artistic search domain. To date such measures have not focused on selecting from all individuals based on their contribution to diversity of feature metrics. In recent work on TSP problem instance classification, selection based on a direct measure of each individual's contribution to diversity was successfully used to generate hard and easy TSP instances. In this work we use this search framework to evolve diverse variants of a source image in one and two feature dimensions. The resulting images show the spectrum of effects from transforming images to score across the range of each feature. The results also reveal interesting correlations between feature values in two dimensions.",
"title": ""
},
{
"docid": "1056082cbafcc1284d33c6bd97c67ad4",
"text": "# Display the current working directory getwd(); # If necessary, change the path below to the directory where the data files are stored. # \".\" means current directory. On Windows use a forward slash / instead of the usual \\. workingDir = \".\"; setwd(workingDir); # Load WGCNA package library(WGCNA) library(cluster) # The following setting is important, do not omit. options(stringsAsFactors = FALSE); # Load the previously saved data load(\"Simulated-RelatingToExt.RData\"); load(\"Simulated-Screening.RData\")",
"title": ""
}
] |
scidocsrr
|
7278290f7df11ca49c342e975082759e
|
A Dependency Parser for Tweets
|
[
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
}
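A minimal illustration (embedding values, feature set and names are hypothetical) of the general recipe in this passage: treat pre-trained word representations as extra, off-the-shelf features that are concatenated to whatever discrete features the supervised tagger already uses.

```python
import numpy as np

# Hypothetical pre-trained embeddings; in practice these would be loaded from disk.
embeddings = {"the": np.array([0.1, -0.2, 0.3]),
              "bank": np.array([0.7, 0.1, -0.5])}
UNK = np.zeros(3)

def token_features(tokens, i):
    """Discrete baseline features plus the unsupervised embedding of the current word."""
    word = tokens[i]
    discrete = np.array([
        float(word[0].isupper()),          # capitalization
        float(word.isdigit()),             # digit shape
        float(i == 0),                     # sentence-initial position
    ])
    dense = embeddings.get(word.lower(), UNK)
    return np.concatenate([discrete, dense])   # feed this vector to the NER/chunking model

print(token_features(["The", "bank", "closed"], 1))
```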
] |
[
{
"docid": "c0ddc4b83145a1ee7b252d65066b8969",
"text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. And they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.",
"title": ""
},
{
"docid": "a7d5ba182deefef418e03725f664d68e",
"text": "Network stacks currently implemented in operating systems can no longer cope with the packet rates offered by 10 Gbit Ethernet. Thus, frameworks were developed claiming to offer a faster alternative for this demand. These frameworks enable arbitrary packet processing systems to be built from commodity hardware handling a traffic rate of several 10 Gbit interfaces, entering a domain previously only available to custom-built hardware. In this paper, we survey various frameworks for high-performance packet IO. We analyze the performance of the most prominent frameworks based on representative measurements in packet forwarding scenarios. Therefore, we quantify the effects of caching and look at the tradeoff between throughput and latency. Moreover, we introduce a model to estimate and assess the performance of these packet processing frameworks.",
"title": ""
},
{
"docid": "2f0ad3cc279dfb4a10f4fbad1b2f1186",
"text": "OBJECTIVE\nTo assess the feasibility and robustness of an asynchronous and non-invasive EEG-based Brain-Computer Interface (BCI) for continuous mental control of a wheelchair.\n\n\nMETHODS\nIn experiment 1 two subjects were asked to mentally drive both a real and a simulated wheelchair from a starting point to a goal along a pre-specified path. Here we only report experiments with the simulated wheelchair for which we have extensive data in a complex environment that allows a sound analysis. Each subject participated in five experimental sessions, each consisting of 10 trials. The time elapsed between two consecutive experimental sessions was variable (from 1h to 2months) to assess the system robustness over time. The pre-specified path was divided into seven stretches to assess the system robustness in different contexts. To further assess the performance of the brain-actuated wheelchair, subject 1 participated in a second experiment consisting of 10 trials where he was asked to drive the simulated wheelchair following 10 different complex and random paths never tried before.\n\n\nRESULTS\nIn experiment 1 the two subjects were able to reach 100% (subject 1) and 80% (subject 2) of the final goals along the pre-specified trajectory in their best sessions. Different performances were obtained over time and path stretches, what indicates that performance is time and context dependent. In experiment 2, subject 1 was able to reach the final goal in 80% of the trials.\n\n\nCONCLUSIONS\nThe results show that subjects can rapidly master our asynchronous EEG-based BCI to control a wheelchair. Also, they can autonomously operate the BCI over long periods of time without the need for adaptive algorithms externally tuned by a human operator to minimize the impact of EEG non-stationarities. This is possible because of two key components: first, the inclusion of a shared control system between the BCI system and the intelligent simulated wheelchair; second, the selection of stable user-specific EEG features that maximize the separability between the mental tasks.\n\n\nSIGNIFICANCE\nThese results show the feasibility of continuously controlling complex robotics devices using an asynchronous and non-invasive BCI.",
"title": ""
},
{
"docid": "91e32e80a6a2f2a504776b9fd86425ca",
"text": "We propose a method for semi-supervised semantic segmentation using an adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross entropy loss of the proposed model. In addition, the fully convolutional discriminator enables semi-supervised learning through discovering the trustworthy regions in predicted results of unlabeled images, thereby providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images to enhance the segmentation model. Experimental results on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
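A condensed sketch (assuming PyTorch; the segmentation network and the fully convolutional discriminator are stubs, and all weights and thresholds are illustrative) of the loss structure described above: labeled images contribute cross entropy plus an adversarial term, while unlabeled images contribute a self-training term only on spatial regions the discriminator judges trustworthy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 5
disc = nn.Conv2d(NUM_CLASSES, 1, kernel_size=1)        # stub for the fully convolutional discriminator

def labeled_loss(seg_logits, labels, lam_adv=0.01):
    """Cross entropy on ground truth plus an adversarial term on the predicted map."""
    ce = F.cross_entropy(seg_logits, labels)
    prob = torch.softmax(seg_logits, dim=1)
    d_out = disc(prob)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return ce + lam_adv * adv

def unlabeled_loss(seg_logits, conf_thresh=0.2, lam_semi=0.1):
    """Self-training on unlabeled images, restricted to regions the discriminator trusts."""
    prob = torch.softmax(seg_logits, dim=1)
    confidence = torch.sigmoid(disc(prob)).squeeze(1)   # (batch, H, W) trust map
    pseudo = prob.argmax(dim=1)                         # pseudo labels from the model itself
    mask = (confidence > conf_thresh).float()
    ce = F.cross_entropy(seg_logits, pseudo, reduction="none")
    return lam_semi * (ce * mask).sum() / mask.sum().clamp(min=1.0)

# Toy shapes: 2 images, 5 classes, 8x8 predictions (the discriminator is trained separately).
seg_logits = torch.randn(2, NUM_CLASSES, 8, 8, requires_grad=True)
labels = torch.randint(0, NUM_CLASSES, (2, 8, 8))
print(float(labeled_loss(seg_logits, labels)), float(unlabeled_loss(seg_logits)))
```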
{
"docid": "c1a5eb1b5efa91cf701d24e890e20ae0",
"text": "Rancimat induction time of palm oil (PO), several extra virgin olive oils (EV) and their binary blends have been determined at three different temperatures (120, 130 and 140°C). Analytical composition and oxidation stability of PO/EV blends were found to be a linear combination of the oil partners. Induction time of pure PO was always higher than those of EV oils and blends, in which induction time increased proportionally with the percentage of PO. However, induction time of 80% PO blend was similar to that of pure PO. Fatty acid composition appeared to be the most important factor affecting heat-oxidation stability and a saturated/unsaturated ratio near 1 was the optimally stable composition. Conversely, total phenols had a zero or negative role on the oxidative stability of the blends. Finally, in heat-oxidised oils significant losses of polyunsaturated fatty acids and formation of short-chain fatty acids were recorded.",
"title": ""
},
{
"docid": "d5d85cddf50e64d602223308f448da37",
"text": "Congenital adrenal hyperplasia (CAH) is the commonest cause of ambiguous genitalia for female newborns and is one of the conditions under the umbrella term of \"Disorders of Sex Development\" (DSD). Management of these patients require multidisciplinary collaboration and is challenging because there are many aspects of care, such as the most appropriate timing and extent of feminizing surgery required and attention to psychosexual, psychological, and reproductive issues, which still require attention and reconsideration, even in developed nations. In developing nations, however, additional challenges prevail: poverty, lack of education, lack of easily accessible and affordable medical care, traditional beliefs on intersex, religious, and cultural issues, as well as poor community support. There is a paucity of long-term outcome studies on DSD and CAH to inform on best management to achieve optimal outcome. In a survey conducted on 16 patients with CAH and their parents in a Malaysian tertiary center, 31.3% of patients stated poor knowledge of their condition, and 37.5% did not realize that their medications were required for life. This review on the research done on quality of life (QOL) of female patients with CAH aims: to discuss factors affecting QOL of female patients with CAH, especially in the developing population; to summarize the extant literature on the quality of life outcomes of female patients with CAH; and to offer recommendations to improve QOL outcomes in clinical practice and research.",
"title": ""
},
{
"docid": "d558db90f72342eae413ed7937e9120f",
"text": "Latent Dirichlet Allocation (LDA) models trained without stopword removal often produce topics with high posterior probabilities on uninformative words, obscuring the underlying corpus content. Even when canonical stopwords are manually removed, uninformative words common in that corpus will still dominate the most probable words in a topic. In this work, we first show how the standard topic quality measures of coherence and pointwise mutual information act counter-intuitively in the presence of common but irrelevant words, making it difficult to even quantitatively identify situations in which topics may be dominated by stopwords. We propose an additional topic quality metric that targets the stopword problem, and show that it, unlike the standard measures, correctly correlates with human judgements of quality. We also propose a simple-to-implement strategy for generating topics that are evaluated to be of much higher quality by both human assessment and our new metric. This approach, a collection of informative priors easily introduced into most LDA-style inference methods, automatically promotes terms with domain relevance and demotes domain-specific stop words. We demonstrate this approach’s effectiveness in three very different domains: Department of Labor accident reports, online health forum posts, and NIPS abstracts. Overall we find that current practices thought to solve this problem do not do so adequately, and that our proposal offers a substantial improvement for those interested in interpreting their topics as objects in their own right.",
"title": ""
},
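A sketch of the kind of informative prior the passage describes, under two assumptions: a bag-of-words count matrix is available, and the LDA implementation in use accepts a per-word asymmetric topic-word prior. Terms that occur in a large fraction of documents receive a much smaller prior, demoting domain-specific stop words without removing them; the functional form below is illustrative, not the paper's.

```python
import numpy as np

def informative_eta(doc_term_counts, base=0.01, strength=4.0):
    """Build a per-word topic-word prior that shrinks with document frequency.

    doc_term_counts: (num_docs, vocab_size) array of raw counts.
    Returns a vector usable as an asymmetric prior in an LDA implementation that accepts one.
    """
    df = (doc_term_counts > 0).mean(axis=0)          # document frequency in [0, 1]
    eta = base * np.exp(-strength * df)              # common terms -> much smaller prior
    return eta

# Toy corpus: word 0 appears in every document (a domain stop word), word 3 is rare.
counts = np.array([[5, 1, 0, 0],
                   [7, 0, 2, 0],
                   [6, 0, 1, 1]])
print(informative_eta(counts))   # prior for word 0 is far smaller than for word 3
```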
{
"docid": "0af93a361cc354735e77bf1d89b4c089",
"text": "To be exploited for driving assistance purpose, a road obstacle detection system must have a good detection rate and an extremely low false detection rate. Moreover, the field of possible applications depends on the detection range of the system. With these ideas in mind, we propose in this paper a long range generic road obstacle detection system based on fusion between stereovision and laser scanner. The obstacles are detected and tracked by the laser sensor. Afterwards, stereovision is used to confirm the detections. An overview of the whole method is given. Then the confirmation process is detailed: three algorithms are proposed and compared on real road situations",
"title": ""
},
{
"docid": "e5ec3cf10b6664642db6a27d7c76987c",
"text": "We present a protocol for payments across payment systems. It enables secure transfers between ledgers and allows anyone with accounts on two ledgers to create a connection between them. Ledger-provided escrow removes the need to trust these connectors. Connections can be composed to enable payments between any ledgers, creating a global graph of liquidity or Interledger. Unlike previous approaches, this protocol requires no global coordinating system or blockchain. Transfers are escrowed in series from the sender to the recipient and executed using one of two modes. In the Atomic mode, transfers are coordinated using an ad-hoc group of notaries selected by the participants. In the Universal mode, there is no external coordination. Instead, bounded execution windows, participant incentives and a “reverse” execution order enable secure payments between parties without shared trust in any system or institution.",
"title": ""
},
{
"docid": "d485f9e1232148d80c3f561026323d52",
"text": "Response surface methodology (RSM) is a collection of mathematical and statistical techniques for empirical model building. By careful design of experiments, the objective is to optimize a response (output variable) which is influenced by several independent variables (input variables). An experiment is a series of tests, called runs, in which changes are made in the input variables in order to identify the reasons for changes in the output response. Originally, RSM was developed to model experimental responses (Box and Draper, 1987), and then migrated into the modelling of numerical experiments. The difference is in the type of error generated by the response. In physical experiments, inaccuracy can be due, for example, to measurement errors while, in computer experiments, numerical noise is a result of incomplete convergence of iterative processes, round-off errors or the discrete representation of continuous physical RSM, the errors are assumed to be random.",
"title": ""
},
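A worked numerical sketch (toy data, illustrative only) of the core RSM step: fit a second-order polynomial response surface to a designed set of runs by least squares, then read off the stationary point of the fitted surface.

```python
import numpy as np

# Toy two-factor design: runs at coded levels, with a noisy quadratic response.
rng = np.random.default_rng(1)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
y = 10 + 2*x1 - 3*x2 - 4*x1**2 - 5*x2**2 + 1.5*x1*x2 + rng.normal(0, 0.1, x1.size)

# Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: set the gradient of the fitted quadratic to zero and solve.
B = np.array([[2*b[3],   b[5]],
              [  b[5], 2*b[4]]])
g = np.array([b[1], b[2]])
x_stat = np.linalg.solve(B, -g)
print("fitted coefficients:", np.round(b, 2))
print("stationary point:", np.round(x_stat, 3))   # near the true optimum of the toy surface
```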
{
"docid": "f4d040ba9ee379111c572ea96807eeb5",
"text": "In this paper, a systematic design technique for quadruple-ridged flared horn antennas is presented, to enhance the radiation properties through the profiling of the ridge taper. The technique relies on control of the cutoff frequencies of specific modes inside the horn, instead of brute-force optimization. This is used to design a prototype antenna as a feed for an offset Gregorian reflector system, such as considered for the Square Kilometer Array (SKA) radio telescope, to achieve an optimized aperture efficiency from 2 to 12 GHz. The antenna is employed with a quadraxial feeding network that allows the excitation of the fundamental TE11 mode, while suppressing all other modes that causes phase errors in the aperture. Measured results confirm the validity of this approach, where good agreement is found with the simulated results.",
"title": ""
},
{
"docid": "5d5a103852019f1de8455e4d13c0e82a",
"text": "INTRODUCTION The cryptocurrency market has evolved erratically and at unprecedented speed over the course of its short lifespan. Since the release of the pioneer anarchic cryptocurrency, Bitcoin, to the public in January 2009, more than 550 cryptocurrencies have been developed, the majority with only a modicum of success [1]. Research on the industry is still scarce. The majority of it is singularly focused on Bitcoin rather than a more diverse spread of cryptocurrencies and is steadily being outpaced by fluid industry developments, including new coins, technological progression, and increasing government regulation of the markets. Though the fluidity of the industry does, admittedly, present a challenge to research, a thorough evaluation of the cryptocurrency industry writ large is necessary. This paper seeks to provide a concise yet comprehensive analysis of the cryptocurrency industry with particular analysis of Bitcoin, the first decentralized cryptocurrency. Particular attention will be given to examining theoretical economic differences between existing coins. Section 1 of this paper provides an overview of the industry. Section 1.1 provides a brief history of digital currencies, which segues into a discussion of Bitcoin in section 1.2. Section 2 of this paper provides an in-depth analysis of coin economics, partitioning the major currencies by their network security protocol mechanisms, and discussing the long-term theoretical implications that these classes entail. Section 2.1 will discuss network security protocol. The mechanisms will be discussed in the order that follows. Section 2.2 will discuss the proof-of-work (PoW) mechanism used in the Bitcoin protocol and various altcoins. Section 2.3 will discuss the proof-of-stake (PoS) protocol scheme first introduced by Peercoin in 2011, which relies on a less energy intensive security mechanism than PoW. Section 2.4 will discuss a hybrid PoW/PoS mechanism. Section 2.5 will discuss the Byzantine Consensus mechanism. Section 2.6 presents the results of a systematic review of 21 cryptocurrencies. Section 3 provides an overview of factors affecting industry growth, focusing heavily on the regulatory environment in section 3.1. Section 3.2 discusses public perception and acceptance of cryptocurrency as a payment system in the current retail environment. Section 4 concludes the analysis. A note on sources: Because the cryptocurrency industry is still young and factors that impact it are changing on a daily basis, few comprehensive or fully updated academic sources exist on the topic. While academic work was of course consulted for this project, the majority of the information that informs this paper was derived from …",
"title": ""
},
{
"docid": "d9d0edec2ad5ac8120fb8626f208af6c",
"text": "Light-Field enables us to observe scenes from free viewpoints. However, it generally consists of 4-D enormous data, that are not suitable for storing or transmitting without effective compression. 4-D Light-Field is very redundant because essentially it includes just 3-D scene information. Actually, although robust 3-D scene estimation such as depth recovery from Light-Field is not so easy, we successfully derived a method of reconstructing Light-Field directly from 3-D information composed of multi-focus images without any scene estimation. On the other hand, it is easy to synthesize multi-focus images from Light-Field. In this paper, based on the method, we propose novel Light-Field compression via synthesized multi-focus images as effective representation of 3-D scenes. Multi-focus images are easily compressed because they contain mostly low frequency components. We show experimental results by using synthetic and real images. Reconstruction quality of the method is robust even at very low bit-rate.",
"title": ""
},
{
"docid": "2ca5118d8f4402ed1a2d1c26fbcf9f53",
"text": "Weakly supervised data is an important machine learning data to help improve learning performance. However, recent results indicate that machine learning techniques with the usage of weakly supervised data may sometimes cause performance degradation. Safely leveraging weakly supervised data is important, whereas there is only very limited effort, especially on a general formulation to help provide insight to guide safe weakly supervised learning. In this paper we present a scheme that builds the final prediction results by integrating several weakly supervised learners. Our resultant formulation brings two advantages. i) For the commonly used convex loss functions in both regression and classification tasks, safeness guarantees exist under a mild condition; ii) Prior knowledge related to the weights of base learners can be embedded in a flexible manner. Moreover, the formulation can be addressed globally by simple convex quadratic or linear program efficiently. Experiments on multiple weakly supervised learning tasks such as label noise learning, domain adaptation and semi-supervised learning validate the effectiveness.",
"title": ""
},
{
"docid": "dc93f9d515a4ce640bf3913ae9b6bce1",
"text": "Paste the appropriate copyright statement here. ACM now supports three different copyright statements: • ACM copyright: ACM holds the copyright on the work. This is the historical approach. • License: The author(s) retain copyright, but ACM receives an exclusive publication license. • Open Access: The author(s) wish to pay for the work to be open access. The additional fee must be paid to ACM. This text field is large enough to hold the appropriate release statement assuming it is single spaced in a sans-serif 7 point font. Every submission will be assigned their own unique DOI string to be included here. Abstract This position statement presents a notional framework for more tightly integrating interactive visual systems with machine learning. We posit that increasingly, powerful systems will be built for data analysis and consumer use that leverage the best of both human insight and raw computing power by effectively integrating machine learning and human interaction. We note some existing contributions to this space and provide a framework that organizes existing efforts and illuminates future endeavors by suggesting the categories of machine learning algorithm and interaction type that are most germane to this integration.",
"title": ""
},
{
"docid": "efcf84406a2218deeb4ca33cb8574172",
"text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.",
"title": ""
},
{
"docid": "3a37bf4ffad533746d2335f2c442a6d6",
"text": "Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document. In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the topranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets.",
"title": ""
},
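A condensed sketch of the TopicRank pipeline described above. Candidate extraction is assumed to have already happened, the clustering and edge weighting are heavily simplified stand-ins for the paper's hierarchical clustering and offset-based weights, and all names and thresholds are illustrative.

```python
import itertools
import numpy as np

def topicrank(candidates, sim_threshold=0.25, damping=0.85, iters=50):
    # 1) Greedy clustering of candidates into topics by word overlap (stand-in for HAC).
    def overlap(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / len(wa | wb)
    topics = []
    for c in candidates:
        for t in topics:
            if any(overlap(c, m) >= sim_threshold for m in t):
                t.append(c)
                break
        else:
            topics.append([c])

    # 2) Complete graph over topics; edge weights here are a simple stand-in for the
    #    reciprocal-offset weights used in the paper.
    n = len(topics)
    W = np.zeros((n, n))
    for i, j in itertools.combinations(range(n), 2):
        W[i, j] = W[j, i] = 1.0 / (1 + abs(i - j))

    # 3) PageRank-style ranking of topics on the complete graph.
    P = W / W.sum(axis=1)[:, None]            # row-normalized transition matrix
    scores = np.ones(n) / n
    for _ in range(iters):
        scores = (1 - damping) / n + damping * P.T @ scores

    # 4) One keyphrase per top-ranked topic: pick the first candidate of each cluster.
    ranked = sorted(range(n), key=lambda i: -scores[i])
    return [topics[i][0] for i in ranked]

cands = ["graph-based ranking", "keyphrase extraction", "ranking model",
         "topical representation", "extraction of keyphrases"]
print(topicrank(cands)[:3])
```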
{
"docid": "fd0318e6a6ea3dbf422235b7008c3006",
"text": "Multiple myeloma (MM), a cancer of terminally differentiated plasma cells, is the second most common hematological malignancy. The disease is characterized by the accumulation of abnormal plasma cells in the bone marrow that remains in close association with other cells in the marrow microenvironment. In addition to the genomic alterations that commonly occur in MM, the interaction with cells in the marrow microenvironment promotes signaling events within the myeloma cells that enhances survival of MM cells. The phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT)/mammalian target of rapamycin (mTOR) is such a pathway that is aberrantly activated in a large proportion of MM patients through numerous mechanisms and can play a role in resistance to several existing therapies making this a central pathway in MM pathophysiology. Here, we review the pathway, its role in MM, promising preclinical results obtained thus far and the clinical promise that drugs targeting this pathway have in MM.",
"title": ""
},
{
"docid": "8e13f75cd72aff7f7916452ff980c14f",
"text": "The software running on electronic devices is regularly updated, these days. A vehicle consists of many such devices, but is operated in a completely different manner than consumer devices. Update operations are safety critical in the automotive domain. Thus, they demand for a very well secured process. We propose an on-board security architecture which facilitates such update processes by combining hardware and software modules. In this paper, we present a protocol to show how this security architecture is employed in order to achieve secure firmware updates for automotive control units.",
"title": ""
},
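A small sketch (standard-library primitives only, not the protocol from the paper) of the gatekeeping step such an architecture performs on the control-unit side: verify the integrity, authenticity and freshness of a received firmware image against provisioned material before installing it. In the paper this role is backed by hardware security modules; here an HMAC and a version check stand in purely for illustration.

```python
import hashlib
import hmac

TRUSTED_KEY = b"provisioned-into-the-ecu-at-manufacturing"   # illustrative secret

def package_firmware(image: bytes, version: int, key: bytes) -> dict:
    """Back-end side: attach a version header and an authentication tag to an image."""
    header = version.to_bytes(4, "big")
    tag = hmac.new(key, header + image, hashlib.sha256).digest()
    return {"version": version, "image": image, "tag": tag}

def install_if_valid(pkg: dict, installed_version: int, key: bytes) -> bool:
    """ECU side: accept only authentic images that do not roll the version back."""
    header = pkg["version"].to_bytes(4, "big")
    expected = hmac.new(key, header + pkg["image"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, pkg["tag"]):
        return False                       # tampered or unauthenticated image
    if pkg["version"] <= installed_version:
        return False                       # downgrade / replay attempt
    return True                            # safe to flash

pkg = package_firmware(b"\x7fELF...new ECU code...", version=42, key=TRUSTED_KEY)
print(install_if_valid(pkg, installed_version=41, key=TRUSTED_KEY))   # True
```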
{
"docid": "2cff00acdccfc43ed2bc35efe704f1ac",
"text": "A decision to invest in new manufacturing enabling technologies supporting computer integrated manufacturing (CIM) must include non-quantifiable, intangible benefits to the organization in meeting its strategic goals. Therefore, use of tactical level, purely economic, evaluation methods normally result in the rejection of strategically vital automation proposals. This paper includes four different fuzzy multi-attribute group decision-making methods. The first one is a fuzzy model of group decision proposed by Blin. The second is fuzzy synthetic evaluation, the third is Yager’s weighted goals method, and the last one is fuzzy analytic hierarchy process. These methods are extended to select the best computer integrated manufacturing system by taking into account both intangible and tangible factors. A computer software for these approaches is developed and finally some numerical applications of these methods are given to compare the results of all methods. # 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
}
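A small numerical sketch (toy membership values and weights) of one of the methods named above, Yager's weighted goals: each alternative's fuzzy satisfaction of attribute i is raised to that attribute's weight, and the minimum over attributes gives the overall score, so heavily weighted attributes dominate the ranking.

```python
import numpy as np

# Rows: candidate CIM systems; columns: fuzzy satisfaction of each attribute in [0, 1]
# (e.g., flexibility, quality improvement, cost, vendor support).
ratings = np.array([[0.8, 0.6, 0.4, 0.9],
                    [0.5, 0.9, 0.7, 0.6],
                    [0.9, 0.5, 0.8, 0.4]])
weights = np.array([2.0, 1.0, 1.5, 0.5])       # relative importance of the attributes

# Yager's weighted goals: D(x) = min_i mu_i(x) ** w_i
scores = np.min(ratings ** weights, axis=1)
best = int(np.argmax(scores))
print("scores:", np.round(scores, 3), "-> choose system", best)
```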
] |
scidocsrr
|
fcd9cb0397e7f5102d1dbbba2020bcce
|
Embeddings as a Means of Domain Adaptation for the Machine Translation
|
[
{
"docid": "0bbfe548b8c2def9c5d9d02cc2ebe159",
"text": "In this paper, we propose a novel domain adaptation method named “ mixed fine tuning” for neural machine translation (NMT). We combine two existing approaches namely fine tuningandmulti domainNMT. We first train an NMT model on an out-of-domain parallel corpus, and then fine tune it on a parallel corpus which is a mix of the in-domain and out-ofdomain corpora. All corpora are augmented with artificial tags to indicate specific domains. We empirically compare our proposed method against fine tuning and multi domain methods and discuss its benefits and shortcomings.",
"title": ""
},
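A tiny illustration (tag strings, sentence pairs and the oversampling factor are hypothetical) of the data preparation step implied by "mixed fine tuning": every source sentence is prefixed with an artificial token marking its domain, and the fine-tuning corpus mixes the small in-domain set with the out-of-domain set before the model is fine tuned on it.

```python
import random

def tag_corpus(pairs, tag):
    """Prepend an artificial domain token to each source sentence."""
    return [(f"{tag} {src}", tgt) for src, tgt in pairs]

def build_mixed_finetune_set(in_domain, out_domain, oversample=1, seed=0):
    """Mix tagged in-domain data with out-of-domain data for the fine-tuning stage."""
    random.seed(seed)
    mixed = tag_corpus(in_domain, "<2in>") * oversample + tag_corpus(out_domain, "<2out>")
    random.shuffle(mixed)
    return mixed

in_domain = [("das patent wurde erteilt", "the patent was granted")]
out_domain = [("guten morgen", "good morning"), ("wie geht es dir", "how are you")]
for src, tgt in build_mixed_finetune_set(in_domain, out_domain, oversample=2):
    print(src, "|||", tgt)
```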
{
"docid": "cb9d35d577afc17afcca66c16ea2f554",
"text": "In this paper, we propose a new domain adaptation technique for neural machine translation called cost weighting, which is appropriate for adaptation scenarios in which a small in-domain data set and a large general-domain data set are available. Cost weighting incorporates a domain classifier into the neural machine translation training algorithm, using features derived from the encoder representation in order to distinguish in-domain from out-of-domain data. Classifier probabilities are used to weight sentences according to their domain similarity when updating the parameters of the neural translation model. We compare cost weighting to two traditional domain adaptation techniques developed for statistical machine translation: data selection and sub-corpus weighting. Experiments on two large-data tasks show that both the traditional techniques and our novel proposal lead to significant gains, with cost weighting outperforming the traditional methods.",
"title": ""
}
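A sketch (assuming PyTorch; the domain classifier and translation model are stubs, and all shapes are toy values) of the central idea of cost weighting: a domain classifier scores how in-domain each training sentence looks, and that probability scales the sentence's contribution to the NMT loss when parameters are updated.

```python
import torch
import torch.nn.functional as F

def cost_weighted_loss(logits, targets, domain_probs, pad_id=0):
    """Weight each sentence's token-level cross entropy by its in-domain probability.

    logits: (batch, seq_len, vocab); targets: (batch, seq_len);
    domain_probs: (batch,) in-domain probabilities from a domain classifier.
    """
    token_loss = F.cross_entropy(
        logits.transpose(1, 2), targets, ignore_index=pad_id, reduction="none")  # (batch, seq)
    mask = (targets != pad_id).float()
    sent_loss = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
    return (domain_probs * sent_loss).mean()

# Toy shapes: 2 sentences, 5 tokens, vocabulary of 11.
logits = torch.randn(2, 5, 11, requires_grad=True)
targets = torch.randint(1, 11, (2, 5))
domain_probs = torch.tensor([0.9, 0.2])   # the first sentence looks much more in-domain
loss = cost_weighted_loss(logits, targets, domain_probs)
loss.backward()
print(float(loss))
```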
] |
[
{
"docid": "2486eaddb8b00eabcc32ea4588a9d189",
"text": "Ontology design patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is twofold. Firstly, based on well-established works in Software Engineering, we revisit the notion of ontology patterns in Ontology Engineering to introduce the notion of ontology pattern language as a way to organize related ontology patterns. Secondly, we present an overview of a software process ontology pattern language.",
"title": ""
},
{
"docid": "80394c124d823e7639af06fd33ef99c1",
"text": "We investigate whether income inequality affects subsequent growth in a cross-country sample for 1965-90, using the models of Barro (1997), Bleaney and Nishiyama (2002) and Sachs and Warner (1997), with negative results. We then investigate the evolution of income inequality over the same period and its correlation with growth. The dominating feature is inequality convergence across countries. This convergence has been significantly faster amongst developed countries. Growth does not appear to influence the evolution of inequality over time. Outline",
"title": ""
},
{
"docid": "bf71f7f57def7633a5390b572e983bc9",
"text": "With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.",
"title": ""
},
{
"docid": "6c1138ec8f490f824e34d15c13593007",
"text": "We present a DSP simulation environment that will enable students to perform laboratory exercises using Android mobile devices and tablets. Due to the pervasive nature of the mobile technology, education applications designed for mobile devices have the potential to stimulate student interest in addition to offering convenient access and interaction capabilities. This paper describes a portable signal processing laboratory for the Android platform. This software is intended to be an educational tool for students and instructors in DSP, and signals and systems courses. The development of Android JDSP (A-JDSP) is carried out using the Android SDK, which is a Java-based open source development platform. The proposed application contains basic DSP functions for convolution, sampling, FFT, filtering and frequency domain analysis, with a convenient graphical user interface. A description of the architecture, functions and planned assessments are presented in this paper. Introduction Mobile technologies have grown rapidly in recent years and play a significant role in modern day computing. The pervasiveness of mobile devices opens up new avenues for developing applications in education, entertainment and personal communications. Understanding the effectiveness of smartphones and tablets in classroom instruction have been a subject of considerable research in recent years. The advantages of handheld devices over personal computers in K-12 education have been investigated 1 . The study has found that the easy accessibility and maneuverability of handheld devices lead to an increase in student interest. By incorporating mobile technologies into mathematics and applied mathematics courses, it has been shown that smartphones can broaden the scope and effectiveness of technical education in classrooms 2 . Fig 1: Splash screen of the AJDSP Android application Designing interactive applications to complement traditional teaching methods in STEM education has also been of considerable interest. The role of interactive learning in knowledge dissemination and acquisition has been discussed and it has been found to assist in the development of cognitive skills 3 . It has been showed learning potential is enhanced when education tools that possess a higher degree of interactivity are employed 4 . Software applications that incorporate visual components in learning, in order to simplify the understanding of complex theoretical concepts, have been also been developed 5-9 . These applications are generally characterized by rich user interaction and ease of accessibility. Modern mobile phones and tablets possess abundant memory and powerful processors, in addition to providing highly interactive interfaces. These features enable the design of applications that require intensive calculations to be supported on mobile devices. In particular, Android operating system based smartphones and tablets have large user base and sophisticated hardware configurations. Though several applications catering to elementary school education have been developed for Android devices, not much effort has been undertaken towards building DSP simulation applications 10 . In this paper, we propose a mobile based application that will enable students to perform Digital Signal Processing laboratories on their smartphone devices (Figure 1). In order to enable students to perform DSP labs over the Internet, the authors developed J-DSP, a visual programming environment 11-12 . 
J-DSP was designed as a zero footprint, standalone Java applet that can run directly on a browser. Several interactive laboratories have been developed and assessed in undergraduate courses. In addition to containing basic signal processing functions such as sampling, convolution, digital filter design and spectral analysis, J-DSP is also supported by several toolboxes. An iOS version of the software has also been developed and presented 13-15 . Here, we describe an Android based graphical application, A-JDSP, for signal processing simulation. The proposed tool has the potential to enhance DSP education by supporting both educators and students alike to teach and learn digital signal processing. The rest of the paper is organized as follows. We review related work in Section 2 and present the architecture of the proposed application in Section 3. In Section 4 we describe some of the functionalities of the software. We describe planned assessment strategies for the proposed application in Section 5. The concluding remarks and possible directions of extending this work are discussed in Section 6. Related Work Commercial packages such as MATLAB 16 and LabVIEW 17 are commonly used in signal processing research and application development. J-DSP, a web-based graphical DSP simulation package, was proposed as a non-commercial alternative for performing laboratories in undergraduate courses 3 . Though J-DSP is a light-weight application, running J-DSP over the web on mobile devices can be data-intensive. Hence, executing simulations directly on the mobile device is a suitable alternative. A mobile application that supports functions pertinent to different areas in electrical engineering, such as circuit theory, control systems and DSP has been reported 18 . However, it does not contain a comprehensive set of functions to simulate several DSP systems. In addition to this, a mobile interface for the MATLAB package has been released 19 . However, this requires an active version of MATLAB on a remote machine and a high speed internet connection to access the remote machine from the mobile device. In order to circumvent these problems, i-JDSP, an iOS version of the J-DSP software was proposed 13-15 . It implements DSP functions and algorithms optimized for mobile devices, thereby removing the need for internet connectivity. Our work builds upon J-DSP 11-12 and the iOS version of J-DSP 13-15 , and proposes to build an application for the Android operating system. Presently, to the best of our knowledge, there are no freely available Android applications that focus on signal processing education. Architecture The proposed application is implemented using Android-SDK 22 , which is a Java based development framework. The user interfaces are implemented using XML as it is well suited for Android development. The architecture of the proposed system is illustrated in Figure 2. It has five main components: (i) User Interfaces, (ii) Part Object, (iii) Part Calculator, (iv) Part View, and (v) Parts Controller. The role of each of them is described below in detail. The blocks in A-JDSP can be accessed through a function palette (user interface) and each block is associated with a view using which the function properties can be modified. The user interfaces obtain the user input data and pass them to the Part Object. Furthermore, every block has a separate Calculator function to perform the mathematical and signal processing algorithms. 
The Part Calculator uses the data from the input pins of the block, implements the relevant algorithms and updates the output pins. [Figure 2: Architecture of A-JDSP.] All the configuration information, such as the pin specifications, the part name and location of the block is contained in the Part Object class. In addition, the Part Object can access the data from each of the input pins of the block. When the user adds a particular block in the simulation, an instance of the Part Object class is created and is stored by a list object in the Parts Controller. The Parts Controller is an interface between the Part Object and the Part View. One of the main functions of Parts Controller is supervising block creation. The process of block creation by the Parts Controller can be described as follows: The block is configured by the user through the user interface and the block data is passed to an instance of the Part Object class. The Part Object then sends the block configuration information through the Parts Controller to the Part View, which finally renders the block. The Part View is the main graphical interface of the application. This displays the blocks and connections on the screen. It contains functionalities for selecting, moving and deleting blocks. Examples of block diagrams in the A-JDSP application for different simulations are illustrated in Figure 3(a), Figure 4(a) and Figure 5(a) respectively. Functionalities In this section, we describe some of the DSP functionalities that have been developed as part of A-JDSP. Android based Signal Generator block This generates the various input signals necessary for A-JDSP simulations. In addition to deterministic signals such as square, triangular and sinusoids; random signals from Gaussian Rayleigh and Uniform distributions can be generated. The signal related parameters such as signal frequency, time shift, mean and variance can be set through the user interface.",
"title": ""
},
{
"docid": "693e935d405b255ac86b8a9f5e7852a3",
"text": "Recent developments have demonstrated the capacity of rest rict d Boltzmann machines (RBM) to be powerful generative models, able to extract useful featu r s from input data or construct deep artificial neural networks. In such settings, the RBM only yields a preprocessing or an initialization for some other model, instead of acting as a complete supervised model in its own right. In this paper, we argue that RBMs can provide a self-contained framework fo r developing competitive classifiers. We study the Classification RBM (ClassRBM), a variant on the R BM adapted to the classification setting. We study different strategies for training the Cla ssRBM and show that competitive classification performances can be reached when appropriately com bining discriminative and generative training objectives. Since training according to the gener ative objective requires the computation of a generally intractable gradient, we also compare differen t approaches to estimating this gradient and address the issue of obtaining such a gradient for proble ms with very high dimensional inputs. Finally, we describe how to adapt the ClassRBM to two special cases of classification problems, namely semi-supervised and multitask learning.",
"title": ""
},
{
"docid": "a33348ee1396be9be333eb3be8dadb39",
"text": "In the multi-MHz low voltage, high current applications, Synchronous Rectification (SR) is strongly needed due to the forward recovery and the high conduction loss of the rectifier diodes. This paper applies the SR technique to a 10-MHz isolated class-Φ2 resonant converter and proposes a self-driven level-shifted Resonant Gate Driver (RGD) for the SR FET. The proposed RGD can reduce the average on-state resistance and the associated conduction loss of the MOSFET. It also provides precise switching timing for the SR so that the body diode conduction time of the SR FET can be minimized. A 10-MHz prototype with 18 V input, 5 V/2 A output was built to verify the advantage of the SR with the proposed RGD. At full load of 2 A, the SR with the proposed RGD improves the converter efficiency from 80.2% using the SR with the conventional RGD to 82% (an improvement of 1.8%). Compared to the efficiency of 77.3% using the diode rectification, the efficiency improvement is 4.7%.",
"title": ""
},
{
"docid": "edb0c7dbccf56915f3f347558dc62630",
"text": "In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.",
"title": ""
},
{
"docid": "0b4c076b80d91eb20ef71e63f17e9654",
"text": "Current sports injury reporting systems lack a common conceptual basis. We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.",
"title": ""
},
{
"docid": "6a53ea8f1885257e9be6f4515997d666",
"text": "Breast density is one of the most significant factors that is associated with cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs was first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled to a pixel size of 800 µm × 800 µm from 100 µm × 100 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach. Gray level, texture and morphological features were extracted and a least absolute shrinkage and selection operator was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced mammography quality standards act radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.",
"title": ""
},
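A minimal sketch (array values, mask and threshold are illustrative) of the final step described in the abstract: once a network has produced a probability map of breast density (PMD) and a breast mask is available, the percentage density is simply the ratio of the thresholded dense area to the breast area.

```python
import numpy as np

def percent_density(pmd, breast_mask, threshold=0.5):
    """PD = dense area / breast area, computed from a probability map of density (PMD)."""
    dense = (pmd >= threshold) & breast_mask
    return 100.0 * dense.sum() / max(breast_mask.sum(), 1)

# Toy 4x4 example with a stand-in breast segmentation.
pmd = np.array([[0.9, 0.8, 0.2, 0.0],
                [0.7, 0.6, 0.1, 0.0],
                [0.3, 0.2, 0.1, 0.0],
                [0.2, 0.1, 0.0, 0.0]])
breast_mask = pmd > 0.0          # stand-in for an actual breast mask
print(percent_density(pmd, breast_mask))   # about 36.4% for this toy map
```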
{
"docid": "92b4d9c69969c66a1d523c38fd0495a4",
"text": "A level designer typically creates the levels of a game to cater for a certain set of objectives, or mission. But in procedural content generation, it is common to treat the creation of missions and the generation of levels as two separate concerns. This often leads to generic levels that allow for various missions. However, this also creates a generic impression for the player, because the potential for synergy between the objectives and the level is not utilised. Following up on the mission-space generation concept, as described by Dormans [5], we explore the possibilities of procedurally generating a level from a designer-made mission. We use a generative grammar to transform a mission into a level in a mixed-initiative design setting. We provide two case studies, dungeon levels for a rogue-like game, and platformer levels for a metroidvania game. The generators differ in the way they use the mission to generate the space, but are created with the same tool for content generation based on model transformations. We discuss the differences between the two generation processes and compare it with a parameterized approach.",
"title": ""
},
{
"docid": "b020659b3e1da5d497bffda90c9643c3",
"text": "In this paper, we present two efficient and low power H.264 deblocking filter (DBF) hardware implementations that can be used as part of an H.264 video encoder or decoder for portable applications. The first implementation (DBF_4times4) starts filtering the available edges as soon as a new 4times4 block is ready by using a novel edge filtering order to overlap the execution of DBF module with other modules in the H.264 encoder/decoder. Overlapping the execution of DBF hardware with the execution of the other modules in the H.264 encoder/decoder improves the performance of the H.264 encoder/decoder. The second implementation (DBF_16times16) starts filtering the available edges after a new 16times16 macroblock is ready. Both DBF hardware architectures are implemented in Verilog HDL and both implementations are synthesized to 0.18 mum UMC standard cell library. Both DBF implementations can work at 200 MHz and they can process 30 VGA (640times480) frames per second. DBF_4times4 and DBF_16times16 hardware implementations, excluding on-chip memories, are synthesized to 7.4 K and 5.3 K gates respectively. These gate counts are the lowest among the H.264 DBF hardware implementations presented in the literature. Our hardware implementations are more cost effective solutions for portable applications. DBF_16times16 has 36% less power consumption than DBF_4times4 on a Xilinx Virtex II FPGA on an Arm Versatile PB926EJ-S development board. Therefore, DBF_4times4 hardware can be used in an H.264 encoder or decoder for which the performance is more important, whereas DBF_16times16 hardware can be used in an H.264 encoder or decoder for which the power consumption is more important.",
"title": ""
},
{
"docid": "14a8069c29f38129bc8d84b2b3d1ed16",
"text": "Document similarity measures are crucial components of many text-analysis tasks, including information retrieval, document classification, and document clustering. Conventional measures are brittle: They estimate the surface overlap between documents based on the words they mention and ignore deeper semantic connections. We propose a new measure that assesses similarity at both the lexical and semantic levels, and learns from human judgments how to combine them by using machine-learning techniques. Experiments show that the new measure produces values for documents that are more consistent with people’s judgments than people are with each other. We also use it to classify and cluster large document sets covering different genres and topics, and find that it improves both classification and clustering performance.",
"title": ""
},
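A compact sketch (toy data; ordinary least squares stands in for the paper's machine-learned combiner) of the idea of scoring document pairs at both levels: a lexical similarity over term vectors, a semantic similarity over embedded representations, and combination weights fit to human similarity judgments.

```python
import numpy as np

# Toy per-pair features: (lexical cosine, semantic cosine) and human judgments in [0, 1].
features = np.array([[0.10, 0.80],   # few shared words, but semantically close
                     [0.70, 0.75],
                     [0.05, 0.10],
                     [0.40, 0.30]])
human = np.array([0.7, 0.9, 0.1, 0.35])

# Learn a linear combination (with intercept) of the two levels from the judgments.
X = np.column_stack([np.ones(len(features)), features])
w, *_ = np.linalg.lstsq(X, human, rcond=None)

def combined_similarity(lexical, semantic):
    """Score a new document pair with the learned combination of both levels."""
    return float(w @ np.array([1.0, lexical, semantic]))

print(np.round(w, 3), round(combined_similarity(0.2, 0.6), 3))
```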
{
"docid": "bb8b6d2424ef7709aa1b89bc5d119686",
"text": "We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors. In a held-out test set, our Long Short-Term Memory model gives a mean absolute percentage error of 24.2%, outperforming linear Ridge/Lasso and autoregressive GARCH benchmarks by at least 31%. This evaluation is based on an optimal observation and normalization scheme which maximizes the mutual information between domestic trends and daily volatility in the training set. Our preliminary investigation shows strong promise for better predicting stock behavior via deep learning and neural network models.",
"title": ""
},
{
"docid": "e1298fb35b1bdc3f0d70071a6514a793",
"text": "With the wide availability of GPS trajectory data, sustainable development on understanding travel behaviors has been achieved in recent years. But relatively less attention has been paid to uncovering the trip purposes, i.e., why people make the trips. Unlike to the GPS trajectory data, the trip purposes cannot be easily and directly collected on a large scale, which necessitates the inference of trip purposes automatically. To this end, in this paper, we propose a device-free and novel model called Trip2Vec, which consists of three components. In the first component, it augments the context on trip origins and destinations, respectively, by extracting the information about the nearby point of interest configurations and human activity popularity at particular time periods (i.e., activity period popularity) from two crowdsourced datasets. Such context is well-recognized as the clear clue of trip purposes. In the second component, on the top of the augmented context, a deep embedding approach is developed to get a more semantical and discriminative context representation in the latent space. In the third component, we simply adopt the common clustering algorithm (i.e., K-means) to aggregate trips with similar latent representation, then conduct trip purpose interpretation based on the clustering results, followed by understanding the time-evolving tendency of trip purpose patterns (i.e., profiling) in the city-wide level. Finally, we present extensive experiment results with real-world taxi trajectory and Foursquare check-in data generated in New York City (NYC) to demonstrate the effectiveness of the proposed model, and moreover, the obtained city-wide trip purpose patterns are quite consistent with real situations.",
"title": ""
},
{
"docid": "34d668b50d059c941d2e8df9f1aa038e",
"text": "Deep spiking neural networks are becoming increasingly powerful tools for cognitive computing platforms. However, most of the existing studies on such computing models are developed with limited insights on the underlying hardware implementation, resulting in area and power expensive designs. Although several neuromimetic devices emulating neural operations have been proposed recently, their functionality has been limited to very simple neural models that may prove to be inefficient at complex recognition tasks. In this paper, we venture into the relatively unexplored area of utilizing the inherent device stochasticity of such neuromimetic devices to model complex neural functionalities in a probabilistic framework in the time domain. We consider the implementation of a deep spiking neural network capable of performing high-accuracy and lowlatency classification tasks, where the neural computing unit is enabled by the stochastic switching behavior of a magnetic tunnel junction. The simulation studies indicate an energy improvement of 20× over a baseline CMOS design in 45-nm technology.",
"title": ""
},
{
"docid": "396c9da61a3f7c21544278e0396eb689",
"text": "There are several challenges in down-sizing robots for transportation deployment, diversification of locomotion capabilities tuned for various terrains, and rapid and on-demand manufacturing. In this paper we propose an origami-inspired method of addressing these key issues by designing and manufacturing a foldable, deployable, and self-righting version of the origami robot Tribot. Our latest Tribot prototype can jump as high as 215 mm, five times its height, and roll consecutively on any of its edges with an average step size of 55 mm. The 4 g robot self-deploys nine times of its size when released. A compliant roll cage ensures that the robot self-rights onto two legs after jumping or being deployed and also protects the robot from impacts. A description of our prototype and its design, locomotion modes, and fabrication is followed by demonstrations of its key features.",
"title": ""
},
{
"docid": "6f372d493f6408b17be7b0fb2b6182b5",
"text": "Schinzel-Giedion syndrome is a rare autosomal dominant disorder comprising postnatal growth failure, profound developmental delay, seizures, facial dysmorphisms, genitourinary, skeletal, neurological, and cardiac defects. It was recently revealed that Schinzel-Giedion syndrome is caused by de novo mutations in SETBP1, but there are few reports of this syndrome with molecular confirmation. We describe two unrelated Brazilian patients with Schinzel-Giedion syndrome, one of them carrying a novel mutation. We also present a review of clinical manifestations of the syndrome, comparing our cases to patients reported in literature emphasizing the importance of the facial gestalt associated with neurological involvement for diagnostic suspicion of this syndrome.",
"title": ""
},
{
"docid": "579756fadc472d9e65358d6f263802ca",
"text": "1 I n t r o d u c t i o n The topology optimization method for continuum structures (Bends0e and Kikuchi 1988; or BendsCe 1995, for an overview) has reached a level of maturi ty where it is being applied to many industrial problems and it has widespread academic use, not only for structural optimization problems but also in material, mechanism, electromagnetics and other coupled field design problems. Despite its level of matureness, there still exist a number of problems concerning convergence, checkerboards and mesh-dependence which are subject to debate in the topology optimization community. In this paper, we seek to summarize the current knowledge on these problems and to discuss methods with which they can be avoided. The topology optimization problem consists in finding the subdomain f2s with limited volume V*, included in a predetermined design domain ~2, that optimizes a given objective function f (e.g. compliance). Finding the optimal topology corresponds to finding the connectedness, shape and number of holes such that the objective function is extremized. Introducing a density function p defined on $2, taking the value 1 in ~s and 0 elsewhere, the nested version of the problem can be written as mins.t, p t2f(P) p dz <_ V*, } p ( x ) = 0 o r 1, g x E f 2 , (1) i.e. wherever the displacement function u (or in general, the state function) appears, it has been eliminated through the implicit dependence defined by the equilibrium equation. Typically the topology optimization problem is treated by discretizing (1) by dividing ~ into N finite elements. Usually one approximates the density as element-wise constant and thereby the discretized density can be represented by the N-vector p. Taking p to be constant in each element is practical since integrations over elements can be performed with p outside the integral sign, and consequently one can operate with simple scalings of the usual element stiffness matrices. The discretized 0-1 topology optimization problem becomes",
"title": ""
},
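As a hedged illustration of the mesh-independence remedies alluded to in the abstract above, the sketch below implements a classic density (weighted-average) filter over element densities on a regular grid; the linear weighting, filter radius and grid are assumptions for illustration, not the paper's specific scheme.

```python
# Sketch of a density filter used to suppress checkerboards and mesh-dependence:
# each element density is replaced by a distance-weighted average of its
# neighbours within a radius rmin (in element units).
import numpy as np

def density_filter(rho, rmin):
    """rho: (ny, nx) element densities on a regular grid; rmin: filter radius."""
    ny, nx = rho.shape
    out = np.zeros_like(rho)
    r = int(np.ceil(rmin))
    for i in range(ny):
        for j in range(nx):
            wsum, val = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, rmin - np.hypot(di, dj))  # linearly decaying weight
                        wsum += w
                        val += w * rho[ii, jj]
            out[i, j] = val / wsum
    return out

rho = np.random.rand(20, 40)              # placeholder density field
rho_filtered = density_filter(rho, rmin=2.5)
```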
{
"docid": "1d5a91029960f267b49831bee80e348f",
"text": "Deep neural networks (DNNs) have become the dominant technique for acoustic-phonetic modeling due to their markedly improved performance over other models. Despite this, little is understood about the computation they implement in creating phonemic categories from highly variable acoustic signals. In this paper, we analyzed a DNN trained for phoneme recognition and characterized its representational properties, both at the single node and population level in each layer. At the single node level, we found strong selectivity to distinct phonetic features in all layers. Node selectivity to specific manners and places of articulation appeared from the first hidden layer and became more explicit in deeper layers. Furthermore, we found that nodes with similar phonetic feature selectivity were differentially activated to different exemplars of these features. Thus, each node becomes tuned to a particular acoustic manifestation of the same feature, providing an effective representational basis for the formation of invariant phonemic categories. This study reveals that phonetic features organize the activations in different layers of a DNN, a result that mirrors the recent findings of feature encoding in the human auditory system. These insights may provide better understanding of the limitations of current models, leading to new strategies to improve their performance.",
"title": ""
}
] |
scidocsrr
|
3458f2e58387db67f6d19f0a0a76fac6
|
Contrastive Summarization: An Experiment with Consumer Reviews
|
[
{
"docid": "da8e0706b5ca5b7d391a07d443edc0cf",
"text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sources containing such opinions, e.g., product reviews, forums, discussion groups, and blogs. Techniques are now being developed to exploit these sources to help organizations and individuals to gain such important information easily and quickly. In this paper, we first discuss several aspects of the problem in the AI context, and then present some results of our existing work published in KDD-04 and WWW-05.",
"title": ""
},
{
"docid": "9dbea5d01d446bd829085e445f11c5a7",
"text": "We present the results of a large-scale, end-to-end human evaluation of various sentiment summarization models. The evaluation shows that users have a strong preference for summarizers that model sentiment over non-sentiment baselines, but have no broad overall preference between any of the sentiment-based models. However, an analysis of the human judgments suggests that there are identifiable situations where one summarizer is generally preferred over the others. We exploit this fact to build a new summarizer by training a ranking SVM model over the set of human preference judgments that were collected during the evaluation, which results in a 30% relative reduction in error over the previous best summarizer.",
"title": ""
}
] |
[
{
"docid": "6ccd0d743360b18365210456c56efc19",
"text": "Falls are leading cause of injury and death for elderly people. T herefore it is necessary to design a proper fall prevention system to prevent falls at old age The use of MEMS sensor drastically reduces the size of the system which enables the module to be developed as a wearable suite. A special alert notification regarding the fall is activated using twitter. The state of the person can be viewed every 30sec and is well suited for monitoring aged persons. On a typical fall motion the device releases the compressed air module which is to be designed and alarms the concerned.",
"title": ""
},
{
"docid": "7fcfa6b251a20d5bb35516d322ebc6c9",
"text": "Plastic waste disposal is a huge ecotechnological problem and one of the approaches to solving this problem is the development of biodegradable plastics. This review summarizes data on their use, biodegradability, commercial reliability and production from renewable resources. Some commercially successful biodegradable plastics are based on chemical synthesis (i.e. polyglycolic acid, polylactic acid, polycaprolactone, and polyvinyl alcohol). Others are products of microbial fermentations (i.e. polyesters and neutral polysaccharides) or are prepared from chemically modified natural products (e.g., starch, cellulose, chitin or soy protein).",
"title": ""
},
{
"docid": "93175df1463265fcd5a75ec926cc7f1e",
"text": "This study investigated the effect of subtitling modality on incidental vocabulary learning among Iranian EFL learners. To this end, 90 freshmen students studying English Translation at BA level in Abadan Azad University were selected after taking a proficiency test to ensure their homogeneity. Participants were randomly assigned to three experimental groups, namely: Bimodal group (A), Standard group (B) and Reversed group(C). They watched eight video clips selected from three animated movies with different modes of subtitles: A) Bimodal subtitles, B) Standard subtitles and C) Reversed subtitles. Research instrumentation included a pre-test and a post-test following an experimental design. Participants took a pre-test containing new words selected from the clips. After eight treatment sessions, the post-test was administered. Data were analyzed descriptively and inferentially. To arrive at any difference between the three different modes of subtitles, the researcher conducted one-way ANOVA. The results obtained from the tests showed that participants in reversed subtitling group performed significantly different and learned more new vocabulary items. Standard subtitling was the second type of subtitling which revealed to be more effective than bimodal subtitling. © 2014 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "f898a6d7e3a5e9cced5b9da69ef40204",
"text": "Software readability is a property that influences how easily a given piece of code can be read and understood. Since readability can affect maintainability, quality, etc., programmers are very concerned about the readability of code. If automatic readability checkers could be built, they could be integrated into development tool-chains, and thus continually inform developers about the readability level of the code. Unfortunately, readability is a subjective code property, and not amenable to direct automated measurement. In a recently published study, Buse et al. asked 100 participants to rate code snippets by readability, yielding arguably reliable mean readability scores of each snippet; they then built a fairly complex predictive model for these mean scores using a large, diverse set of directly measurable source code properties. We build on this work: we present a simple, intuitive theory of readability, based on size and code entropy, and show how this theory leads to a much sparser, yet statistically significant, model of the mean readability scores produced in Buse's studies. Our model uses well-known size metrics and Halstead metrics, which are easily extracted using a variety of tools. We argue that this approach provides a more theoretically well-founded, practically usable, approach to readability measurement.",
"title": ""
},
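A small sketch in the spirit of the size-and-entropy theory described above: it computes token entropy plus a few size measures for a code snippet. The exact feature set and model of the study are not reproduced; these are illustrative stand-ins.

```python
# Sketch of simple size/entropy features of the kind a sparse readability
# model could use: line count, average line length, token count, token entropy.
import math
import re
from collections import Counter

def readability_features(snippet: str) -> dict:
    lines = snippet.splitlines() or [""]
    tokens = re.findall(r"[A-Za-z_]\w*|\S", snippet)
    counts = Counter(tokens)
    total = sum(counts.values()) or 1
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "num_lines": len(lines),
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "num_tokens": total,
        "token_entropy": entropy,
    }

print(readability_features("for (int i = 0; i < n; i++) {\n    sum += a[i];\n}"))
```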
{
"docid": "8bd44a21a890e7c44fec4e56ddd39af2",
"text": "This paper focuses on the problem of discovering users' topics of interest on Twitter. While previous efforts in modeling users' topics of interest on Twitter have focused on building a \"bag-of-words\" profile for each user based on his tweets, they overlooked the fact that Twitter users usually publish noisy posts about their lives or create conversation with their friends, which do not relate to their topics of interest. In this paper, we propose a novel framework to address this problem by introducing a modified author-topic model named twitter-user model. For each single tweet, our model uses a latent variable to indicate whether it is related to its author's interest. Experiments on a large dataset we crawled using Twitter API demonstrate that our model outperforms traditional methods in discovering user interest on Twitter.",
"title": ""
},
{
"docid": "8f8d97a8b6443f87bef63e8a15382185",
"text": "Semantic publishing is the use of Web and Semantic Web technologies to enhance the meaning of a published journal article, to facilitate its automated discovery, to enable its linking to semantically related articles, to provide access to data within the article in actionable form, and to facilitate integration of data between articles. Recently, semantic publishing has opened the possibility of a major step forward in the digital publishing world. For this to succeed, new semantic models and visualization tools are required to fully meet the specific needs of authors and publishers. In this article, we introduce the principles and architectures of two new ontologies central to the task of semantic publishing: FaBiO, the FRBR-aligned Bibliographic Ontology, an ontology for recording and publishing bibliographic records of scholarly endeavours on the Semantic Web, and CiTO, the Citation Typing Ontology, an ontology for the characterization of bibliographic citations both factually and rhetorically. We present those two models step by step, in order to emphasise their features and to stress their advantages relative to other pre-existing information models. Finally, we review the uptake of FaBiO and CiTO within the academic and publishing communities.",
"title": ""
},
{
"docid": "0f71e64aaf081b6624f442cb95b2220c",
"text": "Objective\nElectronic health record (EHR)-based phenotyping infers whether a patient has a disease based on the information in his or her EHR. A human-annotated training set with gold-standard disease status labels is usually required to build an algorithm for phenotyping based on a set of predictive features. The time intensiveness of annotation and feature curation severely limits the ability to achieve high-throughput phenotyping. While previous studies have successfully automated feature curation, annotation remains a major bottleneck. In this paper, we present PheNorm, a phenotyping algorithm that does not require expert-labeled samples for training.\n\n\nMethods\nThe most predictive features, such as the number of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes or mentions of the target phenotype, are normalized to resemble a normal mixture distribution with high area under the receiver operating curve (AUC) for prediction. The transformed features are then denoised and combined into a score for accurate disease classification.\n\n\nResults\nWe validated the accuracy of PheNorm with 4 phenotypes: coronary artery disease, rheumatoid arthritis, Crohn's disease, and ulcerative colitis. The AUCs of the PheNorm score reached 0.90, 0.94, 0.95, and 0.94 for the 4 phenotypes, respectively, which were comparable to the accuracy of supervised algorithms trained with sample sizes of 100-300, with no statistically significant difference.\n\n\nConclusion\nThe accuracy of the PheNorm algorithms is on par with algorithms trained with annotated samples. PheNorm fully automates the generation of accurate phenotyping algorithms and demonstrates the capacity for EHR-driven annotations to scale to the next level - phenotypic big data.",
"title": ""
},
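A heavily simplified, unsupervised sketch of the normalization idea described above: fit a two-component Gaussian mixture to a log-transformed surrogate feature (here a synthetic count of target codes) and use the posterior of the high-mean component as a phenotype score. This is an illustration only, not the published PheNorm pipeline (no denoising or feature combination).

```python
# Simplified mixture-based phenotype scoring: patients with many target codes
# should fall under the high-mean mixture component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Placeholder data: most patients have few codes, putative cases have many.
code_counts = np.concatenate([rng.poisson(1, 900), rng.poisson(15, 100)])
x = np.log1p(code_counts).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
case_component = int(np.argmax(gmm.means_.ravel()))
phenotype_score = gmm.predict_proba(x)[:, case_component]
print("mean score, putative cases vs controls:",
      phenotype_score[900:].mean(), phenotype_score[:900].mean())
```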
{
"docid": "e17558c5a39f3e231aa6d09c8e2124fc",
"text": "Surveys of child sexual abuse in large nonclinical populations of adults have been conducted in at least 19 countries in addition to the United States and Canada, including 10 national probability samples. All studies have found rates in line with comparable North American research, ranging from 7% to 36% for women and 3% to 29% for men. Most studies found females to be abused at 1 1/2 to 3 times the rate for males. Few comparisons among countries are possible because of methodological and definitional differences. However, they clearly confirm sexual abuse to be an international problem.",
"title": ""
},
{
"docid": "1d0ca65e3019850f25445c4c2bbaf75d",
"text": "Cyber-physical systems are deeply intertwined with their corresponding environment through sensors and actuators. To avoid severe accidents with surrounding objects, testing the the behavior of such systems is crucial. Therefore, this paper presents the novel SMARDT (Specification Methodology Applicable to Requirements, Design, and Testing) approach to enable automated test generation based on the requirement specification and design models formalized in SysML. This paper presents and applies the novel SMARDT methodology to develop a self-adaptive software architecture dealing with controlling, planning, environment understanding, and parameter tuning. To formalize our architecture we employ a recently introduced homogeneous model-driven approach for component and connector languages integrating features indispensable in the cyber-physical systems domain. In a compelling case study we show the model driven design of a self-adaptive vehicle robot based on a modular and extensible architecture.",
"title": ""
},
{
"docid": "d47c5f2b5fea54e0f650869d0d45ac25",
"text": "Time-varying, smooth trajectory estimation is of great interest to the vision community for accurate and well behaving 3D systems. In this paper, we propose a novel principal component local regression filter acting directly on the Riemannian manifold of unit dual quaternions DH1. We use a numerically stable Lie algebra of the dual quaternions together with exp and log operators to locally linearize the 6D pose space. Unlike state of the art path smoothing methods which either operate on SO (3) of rotation matrices or the hypersphere H1 of quaternions, we treat the orientation and translation jointly on the dual quaternion quadric in the 7-dimensional real projective space RP7. We provide an outlier-robust IRLS algorithm for generic pose filtering exploiting this manifold structure. Besides our theoretical analysis, our experiments on synthetic and real data show the practical advantages of the manifold aware filtering on pose tracking and smoothing.",
"title": ""
},
{
"docid": "e6567825361e13418a101919cdccce96",
"text": "In this paper, we propose a novel explanation module to explain the predictions made by a deep network. The explanation module works by embedding a high-dimensional deep network layer nonlinearly into a low-dimensional explanation space while retaining faithfulness, so that the original deep learning predictions can be constructed from the few concepts extracted by the explanation module. We then visualize such concepts for human to learn about the high-level concepts that deep learning is using to make decisions. We propose an algorithm called Sparse Reconstruction Autoencoder (SRAE) for learning the embedding to the explanation space. SRAE aims to reconstruct part of the original feature space while retaining faithfulness. A pull-away term is applied to SRAE to make the explanation space more orthogonal. A visualization system is then introduced for human understanding of the features in the explanation space. The proposed method is applied to explain CNN models in image classification tasks, and several novel metrics are introduced to evaluate the performance of explanations quantitatively without human involvement. Experiments show that the proposed approach generates interesting explanations of the mechanisms CNN use for making predictions.",
"title": ""
},
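A compact PyTorch sketch of the embedding mechanism described above: a small autoencoder maps a layer's activations into a low-dimensional explanation space under a sparse-reconstruction objective. The faithfulness and pull-away terms of the actual SRAE are omitted, and the activation dimensions and training data are placeholders.

```python
# Minimal sparse autoencoder over deep-layer activations: reconstruct the
# activations from a low-dimensional code while keeping the code sparse.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim=4096, exp_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, exp_dim))
        self.decoder = nn.Sequential(nn.Linear(exp_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, h):
        z = self.encoder(h)
        return z, self.decoder(z)

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h = torch.randn(64, 4096)                 # placeholder layer activations
for _ in range(100):
    z, recon = model(h)
    loss = nn.functional.mse_loss(recon, h) + 1e-3 * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```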
{
"docid": "a900a7b1b6eff406fa42906ec5a31597",
"text": "From wearables to smart appliances, the Internet of Things (IoT) is developing at a rapid pace. The challenge is to find the best fitting solution within a range of different technologies that all may be appropriate at the first sight to realize a specific embedded device. A single tool for measuring power consumption of various wireless technologies and low power modes helps to optimize the development process of modern IoT systems. In this paper, we present an accurate but still cost-effective measurement solution for tracking the highly dynamic power consumption of wireless embedded systems. We extended the conventional measurement of a single shunt resistor's voltage drop by using a dual shunt resistor stage with an automatic switch-over between two stages, which leads to a large dynamic measurement range from μA up to several hundreds mA. To demonstrate the usability of our simple-to-use power measurement system different use cases are presented. Using two independent current measurement channels allows to evaluate the timing relation of proprietary RF communication. Furthermore a forecast is given on the expected battery lifetime of a Wifi-based data acquisition system using measurement results of the presented tool.",
"title": ""
},
{
"docid": "6316963035a6a7bf1e44c7f32d322737",
"text": "InGaAs-InP modified charge compensated uni- traveling carrier photodiodes with both absorbing and nonabsorbing depleted region are demonstrated. The fiber-coupled external quantum efficiency was 60% (responsivity at 1550 nm = 0.75 A/W). A 40-mum-diameter photodiode achieved 14-GHz bandwidth and 25-dBm RF output power and a 20-mum-diameter photodiode exhibited 30-GHz bandwidth and 15.5-dBm RF output power. The saturation current-bandwidth products are 1820 mA ldr GHz and 1560 mA GHz for the 40-mum-diameter and 40-mum-diameter devices, respectively.",
"title": ""
},
{
"docid": "c772cf8db92572c008f45791959cc3cd",
"text": "Video captioning is the task of automatically generating a textual description of the actions in a video. Although previous work (e.g. sequence-to-sequence model) has shown promising results in abstracting a coarse description of a short video, it is still very challenging to caption a video containing multiple fine-grained actions with a detailed description. This paper aims to address the challenge by proposing a novel hierarchical reinforcement learning framework for video captioning, where a high-level Manager module learns to design sub-goals and a low-level Worker module recognizes the primitive actions to fulfill the sub-goal. With this compositional framework to reinforce video captioning at different levels, our approach significantly outperforms all the baseline methods on a newly introduced large-scale dataset for fine-grained video captioning. Furthermore, our non-ensemble model has already achieved the state-of-the-art results on the widely-used MSR-VTT dataset.",
"title": ""
},
{
"docid": "fc821977be6a0c420d73ca76c5249dbd",
"text": "Monitoring volatile organic compound (VOC) pollution levels in indoor environments is of great importance for the health and comfort of individuals, especially considering that people currently spend >80% of their time indoors. The primary aim of this paper is to design a low-power ZigBee sensor network and internode data reception control framework to use in the real-time acquisition and communication of data concerning air pollutant levels from VOCs. The network consists of end device sensors with photoionization detectors, routers that propagate the network over long distances, and a coordinator that communicates with a computer. The design is based on the ATmega16 microcontroller and the Atmel RF230 ZigBee module, which are used to effectively process communication data with low power consumption. Priority is given to power consumption and sensing efficiency, which are achieved by incorporating various smart tasking and power management protocols. The measured data are displayed on a computer monitor through a graphical user interface. The preliminary experimental results demonstrate that the wireless sensor network system can monitor VOC concentrations with a high level of accuracy and is thus suitable for automated environmental monitoring. Both good indoor air quality and energy conservation can be achieved by integrating the VOC monitoring system proposed in this paper with the residential integrated ventilation controller.",
"title": ""
},
{
"docid": "734638df47b05b425b0dcaaab11d886e",
"text": "Satisfying the needs of users of online video streaming services requires not only to manage the network Quality of Service (QoS), but also to address the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation conform a solid contribution to QoE management of video streaming services, opening new venues for further research.",
"title": ""
},
{
"docid": "0c12178e7c7d5c66343bb5a152b42fca",
"text": "This study was a randomized controlled trial to investigate the effect of treating women with stress or mixed urinary incontinence (SUI or MUI) by diaphragmatic, deep abdominal and pelvic floor muscle (PFM) retraining. Seventy women were randomly allocated to the training (n = 35) or control group (n = 35). Women in the training group received 8 individual clinical visits and followed a specific exercise program. Women in the control group performed self-monitored PFM exercises at home. The primary outcome measure was self-reported improvement. Secondary outcome measures were 20-min pad test, 3-day voiding diary, maximal vaginal squeeze pressure, holding time and quality of life. After a 4-month intervention period, more participants in the training group reported that they were cured or improved (p < 0.01). The cure/improved rate was above 90%. Both amount of leakage and number of leaks were significantly lower in the training group (p < 0.05) but not in the control group. More aspects of quality of life improved significantly in the training group than in the control group. Maximal vaginal squeeze pressure, however, decreased slightly in both groups. Coordinated retraining diaphragmatic, deep abdominal and PFM function could improve symptoms and quality of life. It may be an alternative management for women with SUI or MUI.",
"title": ""
},
{
"docid": "e5a2c2ef9d2cb6376b18c1e7232016b2",
"text": "In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: Dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results.",
"title": ""
},
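For concreteness, a numpy sketch of the CENTRIST descriptor mentioned above: each interior pixel is compared with its eight neighbours to form an 8-bit census code, and the descriptor is the 256-bin histogram of those codes. Border handling and the spatial layout used in the paper are simplified away.

```python
# Sketch of a CENsus TRansform hISTogram (CENTRIST) descriptor for a
# grayscale image given as a 2-D integer array.
import numpy as np

def centrist(gray):
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # interior pixels
    codes = np.zeros_like(c)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (di, dj) in enumerate(neighbours):
        shifted = g[1 + di:g.shape[0] - 1 + di, 1 + dj:g.shape[1] - 1 + dj]
        codes |= ((shifted >= c).astype(np.int32) << bit)   # one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

descriptor = centrist(np.random.randint(0, 256, (120, 160)))
print(descriptor.shape)  # (256,)
```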
{
"docid": "b86ea36ee5a3b6c27713de3f809841b8",
"text": "From a group of 1,189 AA patients seen in our dermatology unit, thirteen (3 males, 10 females) experienced hair shedding that started profusely and diffusely over the entire scalp. They were under observation for about 5 years, histopathology and trichograms being performed in all instances. The mean age of the patients was 26.7 years. It took only 2.3 months on average from the onset of hair shedding to total denudation of the scalp. The trichogram at the time of diffuse shedding showed that about 80% had dystrophic roots and the remaining 20% had telogen roots. Histopathological findings and exclamation mark hairs were compatible with alopecia areata. Regrowth of hair was noted 3.2 month after the onset of hair shedding and recovery observed in 4.8 months. All patients were treated by methylprednisolone pulse therapy. During the follow-up period, 53 months on average after recovery, 8 of the 13 patients (61.5%) showed normal scalp hair without recurrence, in 4 patients the recovery was cosmetically acceptable in spite of focal recurrences and only 1 patient showed a severe relapse after recovery. Considering all of the above findings, this group of the patients should be delineated by the term acute alopecia totalis.",
"title": ""
},
{
"docid": "b6012b1b5e74825269f9cf16e2f3e6f0",
"text": "GPS enables a management to maintain Staff attendance and employee registration through mobile application, this application facilitates the staffs to login through mobile phone and track other staff members' whereabouts through mobile phone. In the present scenario manual registration through biometric systems is commonly in practice. The staff will be kept on informed about their attendance constantly by the admin when they login and log out so that the staff can keep a track on their attendance by using this application. The admin can track the location of any staff member using latitude, longitude and IMSI number.",
"title": ""
}
] |
scidocsrr
|
f986e008dd0356eb0d22ee5b452600e5
|
Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features
|
[
{
"docid": "fea6d5cffd6b2943fac155231e7e9d89",
"text": "We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigendecomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported. Spectral graph partitioning methods have been successfully applied to circuit layout [3, 1], load balancing [4] and image segmentation [10, 6]. As a discriminative approach, they do not make assumptions about the global structure of data. Instead, local evidence on how likely two data points belong to the same class is first collected and a global decision is then made to divide all data points into disjunct sets according to some criterion. Often, such a criterion can be interpreted in an embedding framework, where the grouping relationships among data points are preserved as much as possible in a lower-dimensional representation. What makes spectral methods appealing is that their global-optima in the relaxed continuous domain are obtained by eigendecomposition. However, to get a discrete solution from eigenvectors often requires solving another clustering problem, albeit in a lower-dimensional space. That is, eigenvectors are treated as geometrical coordinates of a point set. Various clustering heuristics such as Kmeans [10, 9], transportation [2], dynamic programming [1], greedy pruning or exhaustive search [3, 10] are subsequently employed on the new point set to retrieve partitions. We show that there is a principled way to recover a discrete optimum. This is based on a fact that the continuous optima consist not only of the eigenvectors, but of a whole family spanned by the eigenvectors through orthonormal transforms. The goal is to find the right orthonormal transform that leads to a discretization.",
"title": ""
}
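A hedged numpy sketch of the pipeline described above: eigendecomposition of a normalized affinity matrix, followed by an iterative discretization that alternates a row-wise argmax (non-maximum suppression) with an orthonormal-rotation update obtained from an SVD (orthogonal Procrustes). Initialization and stopping details differ from the original method.

```python
# Multiclass spectral clustering with rotation-based discretization.
import numpy as np

def multiclass_spectral(W, k, n_iter=30):
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    M = D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(M)
    X = vecs[:, -k:]                                   # top-k eigenvectors
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    R = np.eye(k)                                      # simple initialization
    for _ in range(n_iter):
        labels = np.argmax(X @ R, axis=1)              # discretize (row-wise argmax)
        X_d = np.eye(k)[labels]                        # one-hot indicator matrix
        U, _, Vt = np.linalg.svd(X.T @ X_d)            # orthogonal Procrustes update
        R = U @ Vt
    return np.argmax(X @ R, axis=1)

# Two noisy blobs; affinity from a Gaussian kernel on pairwise distances.
pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
labels = multiclass_spectral(np.exp(-D2 / 2.0), k=2)
```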
] |
[
{
"docid": "b315c15c87e17229457809a724450e2e",
"text": "In the Spreadsheet Space, a virtual space for tabular data sharing, users in diverse administrative domains reference each others' spreadsheets at the cell level as well as share data from a variety of platforms and databases. Spreadsheet Space users enjoy the unlimited scope of cloud collaboration with the security of a more limited environment.",
"title": ""
},
{
"docid": "acd7a0c781003597883b453cbb816ead",
"text": "This paper presents the design techniques and realization examples of innovative multilayer substrate integrated waveguide (SIW) structures for integrated wireless system applications. Such multilayered SIW implementation presents numerous advantages such as low profile, light weight, wideband characteristics, and easy integration with other devices and components. In this paper, the state-of-the-art of multilayer SIW passive components for low-cost high-density integrated transceiver design are presented. Filters, couplers, phase shifters, power dividers, and antenna arrays designed for specific applications are discussed and the advantages gained from multilayer schemes are described. Despite their easy fabrications and outstanding performances, these technologies are still struggling to compete with others for potential mainstream solutions. In this paper, we also discuss challenging issues in the development of multilayer SIW integrated modules that should enable a near-future successful widespread deployment.",
"title": ""
},
{
"docid": "d6a611e139ea5df8d83999b2a1234bd5",
"text": "Multi-view matching is an important but a challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method in this paper. We systematically present a method for MCG generation that is composed of cliques, which consist of neighbor nodes in multi-modal feature space and hyper-edges that link pairwise cliques. Moreover, we propose an image set-based clique/edgewise similarity measure to address the issue of the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) preserves the local and global attributes of a graph with the designed structure; 2) eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate the MCG-based 3D model retrieval using three popular single-modal data sets and one novel multi-modal data set. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object data set, the multi-view RGB-D object data set. To the best of our knowledge, it is the largest real-world 3D object data set containing multi-modal and multi-view information.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "f8724f8166eeb48461f9f4ac8fdd87d3",
"text": "The simultaneous use of images from different spectra can be helpful to improve the performance of many computer vision tasks. The core idea behind the usage of crossspectral approaches is to take advantage of the strengths of each spectral band providing a richer representation of a scene, which cannot be obtained with just images from one spectral band. In this work we tackle the cross-spectral image similarity problem by using Convolutional Neural Networks (CNNs). We explore three different CNN architectures to compare the similarity of cross-spectral image patches. Specifically, we train each network with images from the visible and the near-infrared spectrum, and then test the result with two public cross-spectral datasets. Experimental results show that CNN approaches outperform the current state-of-art on both cross-spectral datasets. Additionally, our experiments show that some CNN architectures are capable of generalizing between different crossspectral domains.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "26b7380379094803b9a46a4742bcafad",
"text": "Entity resolution, the task of automatically determining which mentions refer to the same real-world entity, is a crucial aspect of knowledge base construction and management. However, performing entity resolution at large scales is challenging because (1) the inference algorithms must cope with unavoidable system scalability issues and (2) the search space grows exponentially in the number of mentions. Current conventional wisdom has been that performing coreference at these scales requires decomposing the problem by first solving the simpler task of entity-linking (matching a set of mentions to a known set of KB entities), and then performing entity discovery as a post-processing step (to identify new entities not present in the KB). However, we argue that this traditional approach is harmful to both entity-linking and overall coreference accuracy. Therefore, we embrace the challenge of jointly modeling entity-linking and entity-discovery as a single entity resolution problem. In order to make progress towards scalability we (1) present a model that reasons over compact hierarchical entity representations, and (2) propose a novel distributed inference architecture that does not suffer from the synchronicity bottleneck which is inherent in map-reduce architectures. We demonstrate that more test-time data actually improves the accuracy of coreference, and show that joint coreference is substantially more accurate than traditional entity-linking, reducing error by 75%.",
"title": ""
},
{
"docid": "469e5c159900b9d6662a9bfe9e01fde7",
"text": "In the research of rule extraction from neural networks,fidelity describes how well the rules mimic the behavior of a neural network whileaccuracy describes how well the rules can be generalized. This paper identifies thefidelity-accuracy dilemma. It argues to distinguishrule extraction using neural networks andrule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.",
"title": ""
},
{
"docid": "14c3d8cee12007dc8af75c7e0df77f00",
"text": "A modular magic sudoku solution is a sudoku solution with symbols in {0, 1, ..., 8} such that rows, columns, and diagonals of each subsquare add to zero modulo nine. We count these sudoku solutions by using the action of a suitable symmetry group and we also describe maximal mutually orthogonal families.",
"title": ""
},
{
"docid": "09b77e632fb0e5dfd7702905e51fc706",
"text": "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.",
"title": ""
},
{
"docid": "976f97f5b64080cf48da206fef3acb27",
"text": "One of the primary architectural principles behind the Internet is the use of distributed protocols, which facilitates fault tolerance and distributed management. Unfortunately, having nodes (i.e., switches and routers) perform control decisions independently makes it difficult to control the network or even understand or debug its overall emergent behavior. As a result, networks are often inefficient, unstable, and fragile. This Internet architecture also poses a significant, often insurmountable, challenge to the deployment of new protocols and evolution of existing ones. Software defined networking (SDN) is a recent networking architecture with promising properties relative to these weaknesses in traditional networks. SDN decouples the control plane, which makes the network forwarding decisions, from the data plane, which mainly forwards the data. This decoupling enables more centralized control where coordinated decisions directly guide the network to desired operating conditions. Moreover, decoupling the control enables graceful evolution of protocols, and the deployment of new protocols without having to replace the data plane switches. In this survey, we review recent work that leverages SDN in wireless network settings, where they are not currently widely adopted or well understood. More specifically, we evaluate the use of SDN in four classes of popular wireless networks: cellular, sensor, mesh, and home networks. We classify the different advantages that can be obtained by using SDN across this range of networks, and hope that this classification identifies unexplored opportunities for using SDN to improve the operation and performance of wireless networks.",
"title": ""
},
{
"docid": "eda607a60321038e75104bf555856d4f",
"text": "Knee injuries occur commonly in sports, limiting field and practice time and performance level. Although injury etiology relates primarily to sports specific activity, female athletes are at higher risk of knee injury than their male counterparts in jumping and cutting sports. Particular pain syndromes such as anterior knee pain and injuries such as noncontact anterior cruciate ligament (ACL) injuries occur at a higher rate in female than male athletes at a similar level of competition. Anterior cruciate ligament injuries can be season or career ending, at times requiring costly surgery and rehabilitation. Beyond real-time pain and functional limitations, previous injury is implicated in knee osteoarthritis occurring later in life. Although anatomical parameters differ between and within the sexes, it is not likely this is the single reason for knee injury rate disparities. Clinicians and researchers have also studied the role of sex hormones and dynamic neuromuscular imbalances in female compared with male athletes in hopes of finding the causes for the increased rate of ACL injury. Understanding gender differences in knee injuries will lead to more effective prevention strategies for women athletes who currently suffer thousands of ACL tears annually. To meet the goal in sports medicine of safely returning an athlete to her sport, our evaluation, assessment, treatments and prevention strategies must reflect not only our knowledge of the structure and innervations of the knee but neuromuscular control in multiple planes and with multiple forces while at play.",
"title": ""
},
{
"docid": "a7d3c1a4089d55461f9c74a345883f63",
"text": "Robots that can easily interact with humans and move through natural environments are becoming increasingly essential as assistive devices in the home, office and hospital. These machines need to be safe, effective, and easy to control. One strategy towards accomplishing these goals is to build the robots using soft and flexible materials to make them much more approachable and less likely to damage their environment. A major challenge is that comparatively little is known about how best to design, fabricate and control deformable machines. Here we describe the design, fabrication and control of a novel soft robotic platform (Softworms) as a modular device for research, education and public outreach. These robots are inspired by recent neuromechanical studies of crawling and climbing by larval moths and butterflies (Lepidoptera, caterpillars). Unlike most soft robots currently under development, the Softworms do not rely on pneumatic or fluidic actuators but are electrically powered and actuated using either shape-memory alloy microcoils or motor tendons, and they can be modified to accept other muscle-like actuators such as electroactive polymers. The technology is extremely versatile, and different designs can be quickly and cheaply fabricated by casting elastomeric polymers or by direct 3D printing. Softworms can crawl, inch or roll, and they are steerable and even climb steep inclines. Softworms can be made in any shape but here we describe modular and monolithic designs requiring little assembly. These modules can be combined to make multi-limbed devices. We also describe two approaches for controlling such highly deformable structures using either model-free state transition-reward matrices or distributed, mechanically coupled oscillators. In addition to their value as a research platform, these robots can be developed for use in environmental, medical and space applications where cheap, lightweight and shape-changing deformable robots will provide new performance capabilities.",
"title": ""
},
{
"docid": "07153810148e93a0bc0b62a6de77594c",
"text": "Six healthy young male volunteers at a contract research organization were enrolled in the first phase 1 clinical trial of TGN1412, a novel superagonist anti-CD28 monoclonal antibody that directly stimulates T cells. Within 90 minutes after receiving a single intravenous dose of the drug, all six volunteers had a systemic inflammatory response characterized by a rapid induction of proinflammatory cytokines and accompanied by headache, myalgias, nausea, diarrhea, erythema, vasodilatation, and hypotension. Within 12 to 16 hours after infusion, they became critically ill, with pulmonary infiltrates and lung injury, renal failure, and disseminated intravascular coagulation. Severe and unexpected depletion of lymphocytes and monocytes occurred within 24 hours after infusion. All six patients were transferred to the care of the authors at an intensive care unit at a public hospital, where they received intensive cardiopulmonary support (including dialysis), high-dose methylprednisolone, and an anti-interleukin-2 receptor antagonist antibody. Prolonged cardiovascular shock and acute respiratory distress syndrome developed in two patients, who required intensive organ support for 8 and 16 days. Despite evidence of the multiple cytokine-release syndrome, all six patients survived. Documentation of the clinical course occurring over the 30 days after infusion offers insight into the systemic inflammatory response syndrome in the absence of contaminating pathogens, endotoxin, or underlying disease.",
"title": ""
},
{
"docid": "592431c03450be59f10e56dcabed0ebf",
"text": "Recent advances in machine learning have led to innovative applications and services that use computational structures to reason about complex phenomenon. Over the past several years, the security and machine-learning communities have developed novel techniques for constructing adversarial samples--malicious inputs crafted to mislead (and therefore corrupt the integrity of) systems built on computationally learned models. The authors consider the underlying causes of adversarial samples and the future countermeasures that might mitigate them.",
"title": ""
},
{
"docid": "1d4201c3e0c86c8fad74003be243afb7",
"text": "BACKGROUND\nTwo primary factors that contribute to obesity are unhealthy eating and sedentary behavior. These behaviors are particularly difficult to change in the long-term because they are often enacted habitually. Cognitive Remediation Therapy has been modified and applied to the treatment of obesity (CRT-O) with preliminary results of a randomized controlled trial demonstrating significant weight loss and improvements in executive function. The objective of this study was to conduct a secondary data analysis of the CRT-O trial to evaluate whether CRT-O reduces unhealthy habits that contribute to obesity via improvements in executive function.\n\n\nMETHOD\nEighty participants with obesity were randomized to CRT-O or control. Measures of executive function (Wisconsin Card Sort Task and Trail Making Task) and unhealthy eating and sedentary behavior habits were administered at baseline, post-intervention and at 3 month follow-up.\n\n\nRESULTS\nParticipants receiving CRT-O demonstrated improvements in both measures of executive function and reductions in both unhealthy habit outcomes compared to control. Mediation analyses revealed that change in one element of executive function performance (Wisconsin Card Sort Task perseverance errors) mediated the effect of CRT-O on changes in both habit outcomes.\n\n\nCONCLUSION\nThese results suggest that the effectiveness of CRT-O may result from the disruption of unhealthy habits made possible by improvements in executive function. In particular, it appears that cognitive flexibility, as measured by the Wisconsin Card Sort task, is a key mechanism in this process. Improving cognitive flexibility may enable individuals to capitalise on interruptions in unhealthy habits by adjusting their behavior in line with their weight loss goals rather than persisting with an unhealthy choice.\n\n\nTRIAL REGISTRATION\nThe RCT was registered with the Australian New Zealand Registry of Clinical Trials (trial id: ACTRN12613000537752 ).",
"title": ""
},
{
"docid": "28352dd6b60b511ff812820f4e712cde",
"text": "Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called \"AnnexML\". At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is efficiently performed by using an approximate nearest neighbor search method that efficiently explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that our AnnexML can significantly improve prediction accuracy, especially on data sets that have larger a label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, which is a state-of-the-art embedding-based method.",
"title": ""
},
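A sketch of just the graph-construction ingredient described above: a k-nearest-neighbour graph over label vectors built with scikit-learn's exact search. The learned embedding and the approximate search used at prediction time are not reproduced, and the label matrix is synthetic.

```python
# Build a k-NN graph over (sparse) label vectors; exact search for simplicity.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
label_vectors = (rng.random((500, 100)) < 0.05).astype(float)   # 500 samples, 100 labels
label_vectors[np.arange(500), rng.integers(0, 100, size=500)] = 1.0  # ensure non-empty label sets

nn = NearestNeighbors(n_neighbors=11, metric="cosine").fit(label_vectors)
dist, idx = nn.kneighbors(label_vectors)
knn_graph = {i: list(idx[i, 1:]) for i in range(len(label_vectors))}  # drop self-neighbour
print(len(knn_graph[0]), "neighbours per node")
```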
{
"docid": "cbafadf539a05b13e77b440b7e07993e",
"text": "In this paper, we present a system, UTTime, which we submitted to TempEval-3 for Task C: Annotating temporal relations. The system uses logistic regression classifiers and exploits features extracted from a deep syntactic parser, including paths between event words in phrase structure trees and their path lengths, and paths between event words in predicateargument structures and their subgraphs. UTTime achieved an F1 score of 34.9 based on the graphed-based evaluation for Task C (ranked 2) and 56.45 for Task C-relationonly (ranked 1) in the TempEval-3 evaluation.",
"title": ""
},
{
"docid": "1a747f8474841b6b99184487994ad6a2",
"text": "This paper discusses the effects of multivariate correlation analysis on the DDoS detection and proposes an example, a covariance analysis model for detecting SYN flooding attacks. The simulation results show that this method is highly accurate in detecting malicious network traffic in DDoS attacks of different intensities. This method can effectively differentiate between normal and attack traffic. Indeed, this method can detect even very subtle attacks only slightly different from the normal behaviors. The linear complexity of the method makes its real time detection practical. The covariance model in this paper to some extent verifies the effectiveness of multivariate correlation analysis for DDoS detection. Some open issues still exist in this model for further research.",
"title": ""
},
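To make the covariance-analysis idea concrete, here is a hedged sketch: per-window covariance matrices of traffic features are compared against a profile estimated from normal windows, and a window is flagged when the Frobenius distance exceeds a threshold. The features, distributions and threshold are assumptions, not the paper's model.

```python
# Covariance-based anomaly detector for traffic windows.
import numpy as np

def window_cov(X):
    """X: (samples_per_window, n_features) -> feature covariance matrix."""
    return np.cov(X, rowvar=False)

rng = np.random.default_rng(0)
# Placeholder features per interval: [SYN count, SYN-ACK count, FIN count]
normal_windows = [rng.multivariate_normal([10, 9, 8], np.diag([2, 2, 2]), size=30)
                  for _ in range(50)]
profile = np.mean([window_cov(w) for w in normal_windows], axis=0)

def is_attack(window, threshold=5.0):
    return np.linalg.norm(window_cov(window) - profile, ord="fro") > threshold

attack_window = rng.multivariate_normal([80, 9, 8], np.diag([40, 2, 2]), size=30)
print(is_attack(normal_windows[0]), is_attack(attack_window))
```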
{
"docid": "6dfe30408b960c8b65ee0e02ba1f2152",
"text": "The induction motor plays very important role in industrial sectors, primarily due to its robustness and lowcost. When the mechanical load is applied to induction motor which requires speed control, some of the drive and controlstrategies are based on the estimated axis speed of the motor. The speed measurement directly reduces its stability and increases the cost of drive implementation. This paper proposes an alternative methodology for estimating the speed of a three phase induction motor driven by a voltage source inverter, using space vector modulation under the scalar control strategy and based on artificial intelligent controller such as fuzzy, neural and adaptive neuro fuzzy system. To validate the performance of the proposed method under motor load torque and speed reference set point variations, simulation results are presented. The simulation results validates under no load, constant load as well as variable load of the proposed method and confirms ability to use induction motor speed control inpractice. The dynamic modeling, simulation and analysis of induction motor using Conventional, PI, Fuzzy Controller, ANN and Adaptive neuro fuzzy controller in open loop and closed loop have been presented.",
"title": ""
}
] |
scidocsrr
|
2bf693f9482c65571fdcb5181fa46cd1
|
Real-time forest fire detection with wireless sensor networks
|
[
{
"docid": "f7b8956748e8c19468490f35ed764e4e",
"text": "We show how the database community’s notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data-reduction tool; networking approaches, however, have focused on application specific solutions, whereas our innetwork aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and",
"title": ""
}
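A minimal sketch of the in-network aggregation idea behind such a SQL-style interface, here for an AVG query: each node merges its children's partial state records (sum, count) with its own reading and forwards a single record to its parent. The routing tree and readings are hypothetical.

```python
# In-network aggregation for AVG: merge partial state records up a routing tree.
def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

def aggregate(node, children, readings):
    state = (readings[node], 1)                  # this node's own reading
    for child in children.get(node, []):
        state = merge(state, aggregate(child, children, readings))
    return state

# Hypothetical routing tree rooted at sensor 0 with temperature readings.
children = {0: [1, 2], 1: [3, 4], 2: [5]}
readings = {0: 21.0, 1: 22.5, 2: 20.0, 3: 23.0, 4: 21.5, 5: 19.5}
total, count = aggregate(0, children, readings)
print("network AVG =", total / count)
```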
] |
[
{
"docid": "552a1dae3152fcc2c19a83eb26bc1021",
"text": "Several new algorithms for camera-based fall detection have been proposed in the literature recently, with the aim to monitor older people at home so nurses or family members can be warned in case of a fall incident. However, these algorithms are evaluated almost exclusively on data captured in controlled environments, under optimal conditions (simple scenes, perfect illumination and setup of cameras), and with falls simulated by actors. In contrast, we collected a dataset based on real life data, recorded at the place of residence of four older persons over several months. We showed that this poses a significantly harder challenge than the datasets used earlier. The image quality is typically low. Falls are rare and vary a lot both in speed and nature. We investigated the variation in environment parameters and context during the fall incidents. We found that various complicating factors, such as moving furniture or the use of walking aids, are very common yet almost unaddressed in the literature. Under such circumstances and given the large variability of the data in combination with the limited number of examples available to train the system, we posit that simple yet robust methods incorporating, where available, domain knowledge (e.g. the fact that the background is static or that a fall usually involves a downward motion) seem to be most promising. Based on these observations, we propose a new fall detection system. It is based on background subtraction and simple measures extracted from the dominant foreground object such as aspect ratio, fall angle and head speed. We discuss the results obtained, with special emphasis on particular difficulties encountered under real world circumstances.",
"title": ""
},
{
"docid": "2b7d91c38a140628199cbdbee65c008a",
"text": "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and/or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot.",
"title": ""
},
{
"docid": "a7e1d937d17e46bed14158776785bce8",
"text": "Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through the series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results of experiments indicate that the system is able to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates a strong potential of using the proposed approach within a future on-line detection framework.",
"title": ""
},
{
"docid": "3ab831fdb5da974fa56ad412882a4283",
"text": "Irregular streaks are important clues for Melanoma (a potentially fatal form of skin cancer) diagnosis using dermoscopy images. This paper extends our previous algorithm to identify the absence or presence of streaks in a skin lesions, by further analyzing the appearance of detected streak lines, and performing a three-way classification for streaks, Absent, Regular, and Irregular, in a pigmented skin lesion. In addition, the directional pattern of detected lines is analyzed to extract their orientation features in order to detect the underlying pattern. The method uses a graphical representation to model the geometric pattern of valid streaks and the distribution and coverage of the structure. Using these proposed features of the valid streaks along with the color and texture features of the entire lesion, an accuracy of 76.1% and weighted average area under ROC curve (AUC) of 85% is achieved for classifying dermoscopy images into streaks Absent, Regular, or Irregular on 945 images compiled from atlases and the internet without any exclusion criteria. This challenging dataset is the largest validation dataset for streaks detection and classification published to date. The data set has also been applied to the two-class sub-problems of Absent/Present classification (accuracy of 78.3% with AUC of 83.2%) and to Regular/Irregular classification (accuracy 83.6% with AUC of 88.9%). When the method was tested on a cleaned subset of 300 images randomly selected from the 945 images, the AUC increased to 91.8%, 93.2% and 90.9% for the Absent/Regular/Irregular, Absent/Present, and Regular/Irregular problems, respectively.",
"title": ""
},
{
"docid": "fb1f467ab11bb4c01a9e410bf84ac258",
"text": "The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.",
"title": ""
},
{
"docid": "6cab942e78a957f3217971dd4721e3b2",
"text": "(Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics—which has traditionally focused on ethical issues surrounding humans’ use of machines—machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.",
"title": ""
},
{
"docid": "b8f50ba62325ffddcefda7030515fd22",
"text": "The following statement is intended to provide an understanding of the governance and legal structure of the University of Sheffield. The University is an independent corporation whose legal status derives from a Royal Charter granted in 1905. It is an educational charity, with exempt status, regulated by the Office for Students in its capacity as Principal Regulator. The University has charitable purposes and applies them for the public benefit. It must comply with the general law of charity. The University’s objectives, powers and governance framework are set out in its Charter and supporting Statutes and Regulations.",
"title": ""
},
{
"docid": "4645d0d7b1dfae80657f75d3751ef72a",
"text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.",
"title": ""
},
{
"docid": "c15f36dccebee50056381c41e6ddb2dc",
"text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.",
"title": ""
},
{
"docid": "6559d77de48d153153ce77b0e2969793",
"text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.",
"title": ""
},
{
"docid": "2c68945d68f8ccf90648bec7fd5b0547",
"text": "The number of seniors and other people needing daily assistance continues to increase, but the current human resources available to achieve this in the coming years will certainly be insufficient. To remedy this situation, smart habitats have emerged as an innovative avenue for supporting needs of daily assistance. Smart homes aim to provide cognitive assistance in decision making by giving hints, suggestions, and reminders, with different kinds of effectors, to residents. To implement such technology, the first challenge to overcome is the recognition of ongoing activity. Some researchers have proposed solutions based on binary sensors or cameras, but these types of approaches infringed on residents' privacy. A new affordable activity-recognition system based on passive RFID technology can detect errors related to cognitive impairment. The entire system relies on an innovative model of elliptical trilateration with several filters, as well as on an ingenious representation of activities with spatial zones. The authors have deployed the system in a real smart-home prototype; this article renders the results of a complete set of experiments conducted on this new activity-recognition system with real scenarios.",
"title": ""
},
{
"docid": "c51e1b845d631e6d1b9328510ef41ea0",
"text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.",
"title": ""
},
{
"docid": "af1257e27c0a6010a902e78dc8301df4",
"text": "A 20-MHz to 3-GHz wide-range multiphase delay-locked loop (DLL) has been realized in 90-nm CMOS technology. The proposed delay cell extends the operation frequency range. A scaling circuit is adopted to lower the large delay gain when the frequency of the input clock is low. The core area of this DLL is 0.005 mm2. The measured power consumption values are 0.4 and 3.6 mW for input clocks of 20 MHz and 3 GHz, respectively. The measured peak-to-peak and root-mean-square jitters are 2.3 and 16 ps at 3 GHz, respectively.",
"title": ""
},
{
"docid": "2fb6392a161cf64b1fe009dd8db99857",
"text": "Humans have an incredible capacity to learn properties of objects by pure tactile exploration with their two hands. With robots moving into human-centred environment, tactile exploration becomes more and more important as vision may be occluded easily by obstacles or fail because of different illumination conditions. In this paper, we present our first results on bimanual compliant tactile exploration, with the goal to identify objects and grasp them. An exploration strategy is proposed to guide the motion of the two arms and fingers along the object. From this tactile exploration, a point cloud is obtained for each object. As the point cloud is intrinsically noisy and un-uniformly distributed, a filter based on Gaussian Processes is proposed to smooth the data. This data is used at runtime for object identification. Experiments on an iCub humanoid robot have been conducted to validate our approach.",
"title": ""
},
{
"docid": "ede8b89c37c10313a84ce0d0d21af8fc",
"text": "The adaptive fuzzy and fuzzy neural models are being widely used for identification of dynamic systems. This paper describes different fuzzy logic and neural fuzzy models. The robustness of models has further been checked by Simulink implementation of the models with application to the problem of system identification. The approach is to identify the system by minimizing the cost function using parameters update.",
"title": ""
},
{
"docid": "c3e8960170cb72f711263e7503a56684",
"text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.",
"title": ""
},
{
"docid": "088308b06392780058dd8fa1686c5c35",
"text": "Every company should be able to demonstrate own efficiency and effectiveness by used metrics or other processes and standards. Businesses may be missing a direct comparison with competitors in the industry, which is only possible using appropriately chosen instruments, whether financial or non-financial. The main purpose of this study is to describe and compare the approaches of the individual authors. to find metric from reviewed studies which organization use to measuring own marketing activities with following separating into financial metrics and non-financial metrics. The paper presents advance in useable metrics, especially financial and non-financial metrics. Selected studies, focusing on different branches and different metrics, were analyzed by the authors. The results of the study is describing relevant metrics to prove efficiency in varied types of organizations in connection with marketing effectiveness. The studies also outline the potential methods for further research focusing on the application of metrics in a diverse environment. The study contributes to a clearer idea of how to measure performance and effectiveness.",
"title": ""
},
{
"docid": "f267030a7ff5a8b4b87b9b5418ec3c28",
"text": "Vision systems employing region segmentation by color are crucial in real-time mobile robot applications, such as RoboCup[1], or other domains where interaction with humans or a dynamic world is required. Traditionally, systems employing real-time color-based segmentation are either implemented in hardware, or as very specific software systems that take advantage of domain knowledge to attain the necessary efficiency. However, we have found that with careful attention to algorithm efficiency, fast color image segmentation can be accomplished using commodity image capture and CPU hardware. Our paper describes a system capable of tracking several hundred regions of up to 32 colors at 30 Hertz on general purpose commodity hardware. The software system is composed of four main parts; a novel implementation of a threshold classifier, a merging system to form regions through connected components, a separation and sorting system that gathers various region features, and a top down merging heuristic to approximate perceptual grouping. A key to the efficiency of our approach is a new method for accomplishing color space thresholding that enables a pixel to be classified into one or more of up to 32 colors using only two logical AND operations. A naive approach could require up to 192 comparisons for the same classification. The algorithms and representations are described, as well as descriptions of three applications in which it has been used.",
"title": ""
},
{
"docid": "7b5f0c88eaf8c23b8e2489e140d0022f",
"text": "Deep learning has been integrated into several existing left ventricle (LV) endocardium segmentation methods to yield impressive accuracy improvements. However, challenges remain for segmentation of LV epicardium due to its fuzzier appearance and complications from the right ventricular insertion points. Segmenting the myocardium collectively (i.e., endocardium and epicardium together) confers the potential for better segmentation results. In this work, we develop a computational platform based on deep learning to segment the whole LV myocardium simultaneously from a cardiac magnetic resonance (CMR) image. The deep convolutional network is constructed using Caffe platform, which consists of 6 convolutional layers, 2 pooling layers, and 1 de-convolutional layer. A preliminary result with Dice metric of 0.75±0.04 is reported on York MR dataset. While in its current form, our proposed one-step deep learning method cannot compete with state-of-art myocardium segmentation methods, it delivers promising first pass segmentation results.",
"title": ""
}
] |
scidocsrr
|
421197d9fe9b03376cf685e6407cf3bb
|
Story Cloze Evaluator: Vector Space Representation Evaluation by Predicting What Happens Next
|
[
{
"docid": "f0d3a2b2f3ca6223cab0e222da21fb54",
"text": "We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
},
{
"docid": "da816b4a0aea96feceefe22a67c45be4",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
}
] |
[
{
"docid": "bd4d6e83ccf5da959dac5bbc174d9d6f",
"text": "This paper addresses the structure-and-motion problem, that requires to find camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented, that departs from the prevailing sequential paradigm and embraces instead a hierarchical approach. This method has several advantages, like a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.",
"title": ""
},
{
"docid": "8d9f65aadba86c29cb19cd9e6eecec5a",
"text": "To achieve privacy requirements, IoT application providers may need to spend a lot of money to replace existing IoT devices. To address this problem, this study proposes the Blockchain Connected Gateways (BC Gateways) to protect users from providing personal data to IoT devices without user consent. In addition, the gateways store user privacy preferences on IoT devices in the blockchain network. Therefore, this study can utilize the blockchain technology to resolve the disputes of privacy issues. In conclusion, this paper can contribute to improving user privacy and trust in IoT applications with legacy IoT devices.",
"title": ""
},
{
"docid": "4c42268cc7089ade6eefa4e25646e250",
"text": "This work describes a color Vision-based System intended to perform stable autonomous driving on unmarked roads. Accordingly, this implies the development of an accurate road surface detection system that ensures vehicle stability. Although this topic has already been documented in the technical literature by different research groups, the vast majority of the already existing Intelligent Transportation Systems are devoted to assisted driving of vehicles on marked extra urban roads and highways. The complete system was tested on the BABIECA prototype vehicle, which was autonomously driven for hundred of kilometers accomplishing different navigation missions on a private circuit that emulates an urban quarter. During the tests, the navigation system demonstrated its robustness with regard to shadows, road texture, and weather and changing illumination conditions.",
"title": ""
},
{
"docid": "87d642c0f5e6fd954508f500e9e6892f",
"text": "The wide-angle lenses (or rather zoom lenses when set to short focal length) typically produce a pronounced barrel distortion. This lens distortion affects damage mappings (i. e. the superposition of damage photographs) as well as perspective rectifications. Lens distortion can however be mostly corrected by applying suitable algorithmic transformations to the digital photograph. The paper presents the algorithms used for this correction, together with programs that perform either the entire task of correction or that allow one to determine the lens correction parameters. The paper concludes with some (rectified) example images and an estimation of the gains in accuracy achieved by applying lens correction algorithms.",
"title": ""
},
{
"docid": "112ecbb8547619577962298fbe65eae1",
"text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box, Category-Partition testing, we propose a methodology and a tool based on machine learning that has shown promising results on a case study involving students as testers. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6f3ec6136c57bcd00ba810c9af7493d6",
"text": "In this article, we summarize some of the recent advancements in assistive technologies that are designed for people with visual impairments (VI) and blind people. Present technology enables applications to be actively disseminated and to efficiently operate on handheld mobile devices. These applications include also those that require high computational requirements. As a consequent, digital travel aids, visual sensing modules, text-to-speech applications, navigational assistance tools, and the combination with diverse assistive haptic devices are becoming consolidated with typical mobile devices. This direction has opened diversity of new perspectives for practical training and rehabilitation of people with VI. The aim of this article is to give an overview about the recent developments of assistive applications designed for people with VI. In conclusion, we recommend designing a unified robust system for people with VI which provides the support of the different kind of services.",
"title": ""
},
{
"docid": "fe3029a9e54f068a1387014778c1128d",
"text": "We propose a simple, scalable, and non-parametric approach for short text classification. Leveraging the well studied and scalable Information Retrieval (IR) framework, our approach mimics human labeling process for a piece of short text. It first selects the most representative and topical-indicative words from a given short text as query words, and then searches for a small set of labeled short texts best matching the query words. The predicted category label is the majority vote of the search results. Evaluated on a collection of more than 12K Web snippets, the proposed approach achieves comparable classification accuracy with the baseline Maximum Entropy classifier using as few as 3 query words and top-5 best matching search hits. Among the four query word selection schemes proposed and evaluated in our experiments, term frequency together with clarity gives the best classification accuracy.",
"title": ""
},
{
"docid": "6d2d9de5db5b03a98a26efc8453588d8",
"text": "In this paper we describe a system for use on a mobile robot that detects potential loop closures using both the visual and spatial appearance of the local scene. Loop closing is the act of correctly asserting that a vehicle has returned to a previously visited location. It is an important component in the search to make SLAM (Simultaneous Localization and Mapping) the reliable technology it should be. Paradoxically, it is hardest in the presence of substantial errors in vehicle pose estimates which is exactly when it is needed most. The contribution of this paper is to show how a principled and robust description of local spatial appearance (using laser rangefinder data) can be combined with a purely camera based system to produce superior performance. Individual spatial components (segments) of the local structure are described using a rotationally invariant shape descriptor and salient aspects thereof, and entropy as measure of their innate complexity. Comparisons between scenes are made using relative entropy and by examining the mutual arrangement of groups of segments. We show the inclusion of spatial information allows the resolution of ambiguities stemming from repetitive visual artifacts in urban settings. Importantly the method we present is entirely independent of the navigation and or mapping process and so is entirely unaffected by gross errors in pose estimation.",
"title": ""
},
{
"docid": "0b41c2e8be4b9880a834b44375eb6c75",
"text": "We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models. AliMe Chat uses an attentive Seq2Seq based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation based models. We launch AliMe Chat for a real-world industrial application and observe better results than another public chatbot.",
"title": ""
},
{
"docid": "69d32f5e6a6612770cd50b20e5e7f802",
"text": "In this paper we present an approach for efficiently retrieving the most similar image, based on point-to-point correspondences, within a sequence that has been acquired through continuous camera movement. Our approach is entailed to the use of standardized binary feature descriptors and exploits the temporal form of the input data to dynamically adapt the search structure. While being straightforward to implement, our method exhibits very fast response times and its Precision/Recall rates compete with state of the art approaches. Our claims are supported by multiple large scale experiments on publicly available datasets.",
"title": ""
},
{
"docid": "85736b2fd608e3d109ce0f3c46dda9ac",
"text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.",
"title": ""
},
{
"docid": "304315feeb6e21149a9c7a3c7c7c372e",
"text": "In the future battlefields, communication between commanders and soldiers will be a decisive factor to complete an assigned mission. In such military tactical scenarios, network topology is constrained by the dynamics of dismounted soldiers in the battlefield. In the battlefield area, soldiers may be divided into a number of squads and fire teams with each one having its own mission, especially in some critical situation (e.g., a military response to an enemy attack or a sweep operation of houses). This situation may cause an unpredictable behavior in terms of wireless network topology state, thus increasing the susceptibility of network topology to decomposition in multiple components. This paper presents a Group Mobility Model simulating realistic battlefield behaviors and movement techniques. We also analyze wireless communication between dismounted soldiers and their squad leader deployed in a mobile ad hoc network (MANET) under different packet sending rate and perturbation factor modeled as a standard deviation parameter which may affect soldiers' mobility. A discussion of results follows, using several performance metrics according to network behavior (such as throughput, relaying rate of unrelated packets and path length).",
"title": ""
},
{
"docid": "1858a8b385ce201a1542c969b1279cf9",
"text": "Topics generated by topic models are typically represented as list of terms. To reduce the cognitive overhead of interpreting these topics for end-users, we propose labelling a topic with a succinct phrase that summarises its theme or idea. Using Wikipedia document titles as label candidates, we compute neural embeddings for documents and words to select the most relevant labels for topics. Compared to a state-of-the-art topic labelling system, our methodology is simpler, more efficient, and finds better topic labels.",
"title": ""
},
{
"docid": "f160e297ece985bd23b72cc5eef1b11d",
"text": "We propose to exploit reconstruction as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation playing a role similar to back-propagation but helping to reduce the reliance on derivatives in order to perform credit assignment across many levels of possibly strong nonlinearities (which is difficult for back-propagation). A regularized auto-encoder tends produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also allow to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of input and target (or of any side information) in input, then its reconstruction of input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that can could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier to model by them, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis stating that such auto-encoder mediated target propagation could play in brains the role of credit assignment through many non-linear, noisy and discrete transformations.",
"title": ""
},
{
"docid": "adb9c43bb23ca4737aebbb9ee4b6c14e",
"text": "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and errorprone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy.",
"title": ""
},
{
"docid": "2ac5e5c4b6d484e5147ec23de501c0ff",
"text": "Data-centric abstractions and execution strategies are needed to exploit parallelism in large-scale graph analytics.",
"title": ""
},
{
"docid": "346349308d49ac2d3bb1cfa5cc1b429c",
"text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.",
"title": ""
},
{
"docid": "6ae4c49007a36a6aa0b9768599b3428a",
"text": "Server providers that support e-commerce applications as a service for multiple e-commerce Web sites traditionally use a tiered server architecture. This architecture includes an application tier to process requests for dynamically generated content. How this tier is provisioned can significantly impact a provider's profit margin. In this article we study methods to provision servers in the application serving tier that increase a server provider's profits. First, we examine actual traces of request arrivals to the application tier of an e-commerce site, and show that the arrival process is effectively Poisson. Next, we construct an optimization problem in the context of a set of application servers modeled as M/G/1/PS queueing systems, and derive three simple methods that approximate the allocation that maximizes profits. Simulation results demonstrate that our approximation methods achieve profits that are close to optimal, and are significantly higher than those achieved via simple heuristics.",
"title": ""
},
{
"docid": "9bb7c151a257ea7368af8b5e02c8fde9",
"text": "The breakthroughs in single molecule spectroscopy of the last decade and the recent advances in super resolution microscopy have boosted the popularity of cyanine dyes in biophysical research. These applications have motivated the investigation of the reactions and relaxation processes that cyanines undergo in their electronically excited states. Studies show that the triplet state is a key intermediate in the photochemical reactions that limit the photostability of cyanine dyes. The removal of oxygen greatly reduces photobleaching, but induces rapid intensity fluctuations (blinking). The existence of non-fluorescent states lasting from milliseconds to seconds was early identified as a limitation in single-molecule spectroscopy and a potential source of artifacts. Recent studies demonstrate that a combination of oxidizing and reducing agents is the most efficient way of guaranteeing that the ground state is recovered rapidly and efficiently. Thiol-containing reducing agents have been identified as the source of long-lived dark states in some cyanines that can be photochemically switched back to the emissive state. The mechanism of this process is the reversible addition of the thiol-containing compound to a double bond in the polymethine chain resulting in a non-fluorescent molecule. This process can be reverted by irradiation at shorter wavelengths. Another mechanism that leads to non-fluorescent states in cyanine dyes is cis-trans isomerization from the singlet-excited state. This process, which competes with fluorescence, involves the rotation of one-half of the molecule with respect to the other with an efficiency that depends strongly on steric effects. The efficiency of fluorescence of most cyanine dyes has been shown to depend dramatically on their molecular environment within the biomolecule. For example, the fluorescence quantum yield of Cy3 linked covalently to DNA depends on the type of linkage used for attachment, DNA sequence and secondary structure. Cyanines linked to the DNA termini have been shown to be mostly stacked at the end of the helix, while cyanines linked to the DNA internally are believed to partially bind to the minor or major grooves. These interactions not only affect the photophysical properties of the probes but also create a large uncertainty in their orientation.",
"title": ""
}
] |
scidocsrr
|
badf17a8cc833f40d3bbaceef88d21f9
|
Dynamic spectrum access in cognitive radio networks with RF energy harvesting
|
[
{
"docid": "10187e22397b1c30b497943764d32c34",
"text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.",
"title": ""
}
] |
[
{
"docid": "be01fc6b7c89259c1aa06ccbfb6402c3",
"text": "Nowadays, automakers have invested in new technologies in order to improve the efficiency of their products. Giant automakers have taken an important step toward achieving this objective by designing continuously variable transmission systems (CVT) to continuously adapt the power of the engine with the external load according to the optimum efficiency curve of engine and reducing fuel consumption; beside, making smooth start up and removing the shock caused by changing the gear ratio and making more pleasurable driving. Considering the specifications of one of Iranian automaker products (the Saipa Pride 131), a CVT with a metal pushing belt and variable pulleys have been designed to replace its current manual transmission system. The necessary parts and components for the CVT have been determined and considering the necessary constraints, its mechanism and components have been designed.",
"title": ""
},
{
"docid": "1039532ef4dfbb7e0d04b25ad99682cb",
"text": "Communication of affect across a distance is not well supported by current technology, despite its importance to interpersonal interaction in modern lifestyles. Touch is a powerful conduit for emotional connectedness, and thus mediating haptic (touch) displays have been proposed to address this deficiency; but suitable evaluative methodology has been elusive. In this paper, we offer a first, structured examination of a design space for haptic support of remote affective communication, by analyzing the space and then comparing haptic models designed to manipulate its key dimensions. In our study, dyads (intimate pairs or strangers) are asked to communicate specified emotions using a purely haptic link that consists of virtual models rendered on simple knobs. These models instantiate both interaction metaphors of varying intimacy, and representations of virtual interpersonal distance. Our integrated objective and subjective observations imply that emotion can indeed be communicated through this medium, and confirm that the factors examined influence emotion communication performance as well as preference, comfort and connectedness. The proposed design space and the study results have implications for future efforts to support affective communication using the haptic modality, and the study approach comprises a first model for systematic evaluation of haptically expressed affect. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "21daaa29b6ff00af028f3f794b0f04b7",
"text": "During the last years, we are experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools referred to as Web 2.0 technologies-applications can be considered as the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on e-business model that businesses need to develop and/or adapt. The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry the use and diffusion of Web 2.0 technologies have a substantial impact of both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only few years ago have been threatening the existence of intermediaries!. In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.",
"title": ""
},
{
"docid": "c7f465088265f34fe798bca8994e98fe",
"text": "Purpose – The purpose of this paper is to foster a common understanding of business process management (BPM) by proposing a set of ten principles that characterize BPM as a research domain and guide its successful use in organizational practice. Design/methodology/approach – The identification and discussion of the principles reflects our viewpoint, which was informed by extant literature and focus groups, including 20 BPM experts from academia and practice. Findings – We identify ten principles which represent a set of capabilities essential for mastering contemporary and future challenges in BPM. Their antonyms signify potential roadblocks and bad practices in BPM. We also identify a set of open research questions that can guide future BPM research. Research limitation/implication – Our findings suggest several areas of research regarding each of the identified principles of good BPM. Also, the principles themselves should be systematically and empirically examined in future studies. Practical implications – Our findings allow practitioners to comprehensively scope their BPM initiatives and provide a general guidance for BPM implementation. Moreover, the principles may also serve to tackle contemporary issues in other management areas. Originality/value – This is the first paper that distills principles of BPM in the sense of both good and bad practice recommendations. The value of the principles lies in providing normative advice to practitioners as well as in identifying open research areas for academia, thereby extending the reach and richness of BPM beyond its traditional frontiers.",
"title": ""
},
{
"docid": "bbfa632dc8e262fd30addc3ac97f1501",
"text": "Chemical Organization Theory (COT) is a recently developed formalism inspired by chemical reactions. Because of its simplicity, generality and power, COT seems able to tackle a wide variety of problems in the analysis of complex, self-organizing systems across multiple disciplines. The elements of the formalism are resources and reactions, where a reaction (which has the form a + b + ... → c + d +...) maps a combination of resources onto a new combination. The resources on the input side are “consumed” by the reaction, which “produces” the resources on the output side. Thus, a reaction represents an elementary process that transforms resources into new resources. Reaction networks tend to self-organize into invariant subnetworks, called “organizations”, which are attractors of their dynamics. These are characterized by closure (no new resources are added) and self-maintenance (no existing resources are lost). Thus, they provide a simple model of autopoiesis: the organization persistently recreates its own components. Organizations can be more or less resilient in the face of perturbations, depending on properties such as the size of their basin of attraction or the redundancy of their reaction pathways. Concrete applications of organizations can be found in autocatalytic cycles, metabolic or genetic regulatory networks, ecosystems, sustainable development, and social systems.",
"title": ""
},
{
"docid": "0d27f38d701e3ed5e4efcdb2f9043e44",
"text": "BACKGROUND\nThe mechanical, rheological, and pharmacological properties of hyaluronic acid (HA) gels differ by their proprietary crosslinking technologies.\n\n\nOBJECTIVE\nTo examine the different properties of a range of HA gels using simple and easily reproducible laboratory tests to better understand their suitability for particular indications.\n\n\nMETHODS AND MATERIALS\nHyaluronic acid gels produced by one of 7 different crosslinking technologies were subjected to tests for cohesivity, resistance to stretch, and microscopic examination. These 7 gels were: non-animal stabilized HA (NASHA® [Restylane®]), 3D Matrix (Surgiderm® 24 XP), cohesive polydensified matrix (CPM® [Belotero® Balance]), interpenetrating network-like (IPN-like [Stylage® M]), Vycross® (Juvéderm Volbella®), optimal balance technology (OBT® [Emervel Classic]), and resilient HA (RHA® [Teosyal Global Action]).\n\n\nRESULTS\nCohesivity varied for the 7 gels, with NASHA being the least cohesive and CPM the most cohesive. The remaining gels could be described as partially cohesive. The resistance to stretch test confirmed the cohesivity findings, with CPM having the greatest resistance. Light microscopy of the 7 gels revealed HA particles of varying size and distribution. CPM was the only gel to have no particles visible at a microscopic level.\n\n\nCONCLUSION\nHyaluronic acid gels are produced with a range of different crosslinking technologies. Simple laboratory tests show how these can influence a gel's behavior, and can help physicians select the optimal product for a specific treatment indication. Versions of this paper have been previously published in French and in Dutch in the Belgian journal Dermatologie Actualité. Micheels P, Sarazin D, Tran C, Salomon D. Un gel d'acide hyaluronique est-il semblable à son concurrent? Derm-Actu. 2015;14:38-43. J Drugs Dermatol. 2016;15(5):600-606..",
"title": ""
},
{
"docid": "586d89b6d45fd49f489f7fb40c87eb3a",
"text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.",
"title": ""
},
{
"docid": "e2f57214cd2ec7b109563d60d354a70f",
"text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .",
"title": ""
},
{
"docid": "6087e066b04b9c3ac874f3c58979f89a",
"text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.",
"title": ""
},
{
"docid": "8a9a4768f10e1d89280753db9bf298cc",
"text": "characteristics of method execution that the agent would want to maximize. So a higher quality method is better whereas a lower cost method is usually preferred. If each outcome oi has a quality distribution ((qi,1, pi,1), (qi,2, pi,2), ..., (qi,m, pi,m)),, the probability that it will execute with quality q is computed as [24] , , , ( ) : i i j i j i j P q p p q q The expected quality E(q) is computed as [24] :",
"title": ""
},
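To make the two reconstructed formulas above concrete, here is a small Python illustration (not code from the cited framework): Pi(q) sums the probabilities of the outcomes whose quality equals q, and E(q) is the ordinary expectation of the quality distribution; the numeric distribution is made up.

# Quality distribution of one outcome: list of (quality, probability) pairs.
dist = [(10, 0.2), (20, 0.5), (20, 0.1), (30, 0.2)]

def prob_of_quality(dist, q):
    # Pi(q) = sum of pi,j over all j with qi,j == q
    return sum(p for quality, p in dist if quality == q)

def expected_quality(dist):
    # E(q) = sum over j of pi,j * qi,j
    return sum(p * quality for quality, p in dist)

print(prob_of_quality(dist, 20))   # 0.6
print(expected_quality(dist))      # 10*0.2 + 20*0.6 + 30*0.2 = 20.0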
{
"docid": "76070cda75614ae4b1e3fe53703e7a43",
"text": "‘Emotion in Motion’ is an experiment designed to understand the emotional reaction of people to a variety of musical excerpts, via self-report questionnaires and the recording of electrodermal response (EDR) and pulse oximetry (HR) signals. The experiment ran for 3 months as part of a public exhibition, having nearly 4000 participants and over 12000 listening samples. This paper presents the methodology used by the authors to approach this research, as well as preliminary results derived from the self-report data and the physiology.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
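A minimal numpy sketch of the closed-form idea described above, under the assumption that the implicit DeepWalk matrix is log(vol(G)/(bT) * (sum over r=1..T of P^r) D^-1) with P the random-walk transition matrix, followed by an element-wise truncated log and a rank-d SVD; the toy graph, window size T, and negative-sample count b are illustrative, and this is not the authors' reference implementation.

# Sketch of the NetMF idea on a small graph (assumptions noted above).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy adjacency matrix
T, b, dim = 3, 1, 2                          # window, negatives, embedding size

vol = A.sum()                                # volume of the graph
d = A.sum(axis=1)
D_inv = np.diag(1.0 / d)
P = D_inv @ A                                # random-walk transition matrix

S = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
M = (vol / (b * T)) * S @ D_inv              # implicit DeepWalk matrix
M = np.log(np.maximum(M, 1.0))               # element-wise truncated log

U, sigma, _ = np.linalg.svd(M)
embedding = U[:, :dim] * np.sqrt(sigma[:dim])  # rank-`dim` factorization
print(embedding)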
{
"docid": "e8d2fc861fd1b930e65d40f6ce763672",
"text": "Despite that burnout presents a serious burden for modern society, there are no diagnostic criteria. Additional difficulty is the differential diagnosis with depression. Consequently, there is a need to dispose of a burnout biomarker. Epigenetic studies suggest that DNA methylation is a possible mediator linking individual response to stress and psychopathology and could be considered as a potential biomarker of stress-related mental disorders. Thus, the aim of this review is to provide an overview of DNA methylation mechanisms in stress, burnout and depression. In addition to state-of-the-art overview, the goal of this review is to provide a scientific base for burnout biomarker research. We performed a systematic literature search and identified 25 pertinent articles. Among these, 15 focused on depression, 7 on chronic stress and only 3 on work stress/burnout. Three epigenome-wide studies were identified and the majority of studies used the candidate-gene approach, assessing 12 different genes. The glucocorticoid receptor gene (NR3C1) displayed different methylation patterns in chronic stress and depression. The serotonin transporter gene (SLC6A4) methylation was similarly affected in stress, depression and burnout. Work-related stress and depressive symptoms were associated with different methylation patterns of the brain derived neurotrophic factor gene (BDNF) in the same human sample. The tyrosine hydroxylase (TH) methylation was correlated with work stress in a single study. Additional, thoroughly designed longitudinal studies are necessary for revealing the cause-effect relationship of work stress, epigenetics and burnout, including its overlap with depression.",
"title": ""
},
{
"docid": "85cc307d55f4d1727e0194890051d34a",
"text": "Exploiting linguistic knowledge to infer properties of neologisms C. Paul Cook Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2010 Neologisms, or newly-coined words, pose problems for natural language processing (NLP) systems. Due to the recency of their coinage, neologisms are typically not listed in computational lexicons—dictionary-like resources that many NLP applications depend on. Therefore when a neologism is encountered in a text being processed, the performance of an NLP system will likely suffer due to the missing word-level information. Identifying and documenting the usage of neologisms is also a challenge in lexicography, the making of dictionaries. The traditional approach to these tasks has been to manually read a lot of text. However, due to the vast quantities of text being produced nowadays, particularly in electronic media such as blogs, it is no longer possible to manually analyze it all in search of neologisms. Methods for automatically identifying and inferring syntactic and semantic properties of neologisms would therefore address problems encountered in both natural language processing and lexicography. Because neologisms are typically infrequent due to their recent addition to the language, approaches to automatically learning word-level information relying on statistical distributional information are in many cases inappropriate. Moreover, neologisms occur in many domains and genres, and therefore approaches relying on domain-specific resources are also inappropriate. The hypothesis of this thesis is that knowledge about etymology—including word formation processes and types of semantic change—can be exploited for the acquisition of aspects of the syntax and semantics of neologisms. Evidence supporting this hypothesis is found",
"title": ""
},
{
"docid": "0d3dd8c380f7e9e0f9b7a1b1380ac36e",
"text": "This paper describes the design and testing process of low power current sensors using PCB rogowski-coil for high current application. The design and testing process of PCB rogowski-coil transducer and electronic circuit are explained in deeply analyze based on the physical structure. It also reveals the linearity and error rates of PCB rogowski-coil current sensors. Tests carried out using Current Generator with 5000A maximum output rate and the output of PCB rogowski-coil current sensor is 333mV following IEC-60044-8 standard. Finally, some measures are proposed for the performance improvement of PCB rogowski-coil current sensor to meet the requirements of protective relaying system in terms of structural design and testing standards.",
"title": ""
},
{
"docid": "3fe09244c12dc7ce92bdd0fd96380cec",
"text": "A novel switching dc-to-dc converter is presented, which has the same general conversion property (increase or decrease of the input dc voltage) as does the conventional buck-boost converter, and which offers through its new optimum topology higher efficiency, lower output voltage ripple, reduced EMI, smaller size and weight, and excellent dynamics response. One of its most significant advantages is that both input and output current are not pulsating but are continuous (essentially dc with small superimposed switching current ripple), this resulting in a close approximation to the ideal physically nonrealizable dc-to-dc transformer. The converter retains the simplest possible structure with the minimum number of components which, when interconnected in its optimum topology, yield the maximum performance. The new converter is extensively experimentally verified, and both the steady state (dc) and the dynamic (ac) theoretical model are correlated well with theexperimental data. both theoretical and experimental comparisons with the conventional buck-boost converter, to which an input filter has been added, demonstrate the significant advantages of the new optimum topology switching dc-to-dc converter.",
"title": ""
},
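The "increase or decrease of the input dc voltage" property mentioned above corresponds, for an ideal converter of the buck-boost class, to a dc conversion ratio of magnitude D/(1-D) in terms of the duty cycle D. The short calculation below illustrates that generic property only; it is not an analysis taken from the paper, and it ignores losses and output-voltage polarity.

# Ideal dc conversion ratio of a buck-boost-class converter: |Vout/Vin| = D/(1-D).
# D < 0.5 steps the voltage down, D > 0.5 steps it up (illustrative values only).
def conversion_ratio(duty):
    return duty / (1.0 - duty)

vin = 12.0
for duty in (0.25, 0.5, 0.75):
    print(f"D={duty:.2f}: |Vout| = {vin * conversion_ratio(duty):.1f} V")
# D=0.25 -> 4.0 V (step down), D=0.50 -> 12.0 V, D=0.75 -> 36.0 V (step up)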
{
"docid": "16cbc21b3092a5ba0c978f0cf38710ab",
"text": "A major challenge to the problem of community question answering is the lexical and semantic gap between the sentence representations. Some solutions to minimize this gap includes the introduction of extra parameters to deep models or augmenting the external handcrafted features. In this paper, we propose a novel attentive recurrent tensor network for solving the lexical and semantic gap in community question answering. We introduce token-level and phrase-level attention strategy that maps input sequences to the output using trainable parameters. Further, we use the tensor parameters to introduce a 3-way interaction between question, answer and external features in vector space. We introduce simplified tensor matrices with L2 regularization that results in smooth optimization during training. The proposed model achieves state-of-the-art performance on the task of answer sentence selection (TrecQA and WikiQA datasets) while outperforming the current state-of-the-art on the tasks of best answer selection (Yahoo! L4) and answer triggering task (WikiQA).",
"title": ""
},
{
"docid": "13aeddf30926dc72c26453d7004f0a5c",
"text": "We would like to give robots the ability to secure human safety in human-robot collisions capable of arising in our living and working environments. However, unfortunately, not much attention has been paid to the technologies of human robot symbiosis to date because almost all robots have been designed and constructed on the assumption that the robots are physically separated from humans. A robot with a new concept will be required to deal with human-robot contact. In this article, we propose a passively movable human-friendly robot that consists of an elastic material-covered manipulator, passive compliant trunk, and passively movable base. The compliant trunk is equipped with springs and dampers, and the passively movable base is constrained by friction developed between the contact surface of the base and the ground. During unexpected collisions, the trunk and base passively move in response to the produced collision force. We describe the validity of the movable base and compliant trunk for collision force suppression, and it is demonstrated in several collision situations. KEY WORDS—passive viscoelastic trunk, passive base, collision force suppression, compliance ellipsoid, redundancy",
"title": ""
},
{
"docid": "23754e7c18cde633aeafface87c4a2c9",
"text": "Text classification is an important task in many text mining applications. Text data generated from the reviews have been growing tremendously. People are participating largely in internet to give their opinion about various subjects and topics. A branch of text mining that deals with people’s views about a subject is opinion mining, in which the data in the form of reviews is mined in order to analyze their sentiment. This study of people’s opinion is sentiment analysis and is a popular research area in text mining. In this paper, movie reviews are classified for sentiment analysis in weka. There are 2000 movie reviews in a dataset obtained from Cornell university dataset repository. The dataset is preprocessed and various filters have been applied to reduce the feature set. Feature selection methods are widely used for gathering most valuable words for each category in text mining processes. They help to find most distinctive words for each category by calculating some variables on data. The mostly employed methods are Chi-Square, Information Gain, and Gain Ratio. In this study, information gain method was employed because of its simplicity, less computational costs and its efficiency. The effects of reduced feature set have been proved to improve the performance of the classifier. Two popular classifiers namely naïve bayes and svm have been experimented with the movie review dataset. The results show that naïve bayes performs better than svm for classification of movie reviews.",
"title": ""
}
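As a rough sketch of the pipeline described above (bag-of-words features, information-gain-style feature selection, then Naive Bayes versus SVM), the following uses scikit-learn with mutual information as a stand-in for information gain; the four toy reviews are invented, and the original study used the 2000-review Cornell corpus in Weka rather than this code.

# Toy sentiment-classification pipeline: vectorize, select features, classify.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

reviews = ["a wonderful, moving film", "dull plot and wooden acting",
           "brilliant performances throughout", "a tedious, boring mess"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative (toy stand-in data)

for name, clf in [("NaiveBayes", MultinomialNB()), ("SVM", LinearSVC())]:
    pipe = make_pipeline(CountVectorizer(stop_words="english"),
                         SelectKBest(mutual_info_classif, k=3),
                         clf)
    scores = cross_val_score(pipe, reviews, labels, cv=2)
    print(name, scores.mean())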
] |
scidocsrr
|
ea7cb387161353bf9b53750d2353f595
|
Automatic Behaviour-based Analysis and Classification System for Malware Detection
|
[
{
"docid": "f9f54cf8c057d2d9f9b559eb62a94e38",
"text": "The proliferation of malware has presented a serious threat to the security of computer systems. Traditional signature-based anti-virus systems fail to detect polymorphic/metamorphic and new, previously unseen malicious executables. Data mining methods such as Naive Bayes and Decision Tree have been studied on small collections of executables. In this paper, resting on the analysis of Windows APIs called by PE files, we develop the Intelligent Malware Detection System (IMDS) using Objective-Oriented Association (OOA) mining based classification. IMDS is an integrated system consisting of three major modules: PE parser, OOA rule generator, and rule based classifier. An OOA_Fast_FP-Growth algorithm is adapted to efficiently generate OOA rules for classification. A comprehensive experimental study on a large collection of PE files obtained from the anti-virus laboratory of KingSoft Corporation is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our IMDS system outperform popular anti-virus software such as Norton AntiVirus and McAfee VirusScan, as well as previous data mining based detection systems which employed Naive Bayes, Support Vector Machine (SVM) and Decision Tree techniques. Our system has already been incorporated into the scanning tool of KingSoft’s Anti-Virus software.",
"title": ""
},
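The IMDS abstract above describes mining association rules over Windows API calls extracted from PE files and using the matching rules for classification. The following toy Python sketch illustrates that classification-by-association idea only; the API names, thresholds, and voting scheme are invented, and this is not the OOA_Fast_FP-Growth algorithm used by IMDS.

# Toy class-association-rule mining over API call sets.
from itertools import combinations
from collections import Counter

samples = [
    ({"CreateRemoteThread", "WriteProcessMemory", "RegSetValue"}, "malware"),
    ({"CreateRemoteThread", "WriteProcessMemory"},                "malware"),
    ({"CreateFile", "ReadFile", "CloseHandle"},                   "benign"),
    ({"CreateFile", "RegSetValue", "CloseHandle"},                "benign"),
]

min_support, min_confidence = 2, 0.8
counts, label_counts = Counter(), Counter()
for apis, label in samples:
    for k in (1, 2):
        for itemset in combinations(sorted(apis), k):
            counts[itemset] += 1
            label_counts[(itemset, label)] += 1

# Keep rules "itemset -> label" with enough support and confidence.
rules = [(set(itemset), label)
         for (itemset, label), n in label_counts.items()
         if counts[itemset] >= min_support and n / counts[itemset] >= min_confidence]

def classify(apis):
    votes = Counter(label for itemset, label in rules if itemset <= apis)
    return votes.most_common(1)[0][0] if votes else "unknown"

print(classify({"CreateRemoteThread", "WriteProcessMemory", "NtQuerySystemInformation"}))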
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
{
"docid": "b8b256ad48fcd7926c55e25ab5ef47be",
"text": "There are several aspects that might influence the performance achieved by existing learning systems. It has been reported that one of these aspects is related to class imbalance in which examples in training data belonging to one class heavily outnumber the examples in the other class. In this situation, which is found in real world data describing an infrequent but important event, the learning system may have difficulties to learn the concept related to the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, allying a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is very competitive to more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex then the ones induced from original data. Random over-sampling usually produced the smallest increase in the mean number of induced rules and Smote + ENN the smallest increase in the mean number of conditions per rule, when compared among the investigated over-sampling methods.",
"title": ""
}
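The combinations discussed above (random over-sampling, Smote, and Smote followed by Tomek-link or ENN cleaning) can be reproduced today with the imbalanced-learn package, as in the sketch below; the library choice and the synthetic dataset are assumptions for illustration, not the implementation used in the study.

# Comparing over-sampling and combined over-sampling + cleaning methods.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.combine import SMOTETomek, SMOTEENN

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
print("original:", Counter(y))

for name, sampler in [("RandomOver", RandomOverSampler(random_state=0)),
                      ("SMOTE", SMOTE(random_state=0)),
                      ("SMOTE+Tomek", SMOTETomek(random_state=0)),
                      ("SMOTE+ENN", SMOTEENN(random_state=0))]:
    X_res, y_res = sampler.fit_resample(X, y)
    print(name, Counter(y_res))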
] |
[
{
"docid": "fdfcab6236d74bcc882fde104f457d83",
"text": "In this study, direct and indirect effects of self-esteem, daily internet use and social media addiction to depression levels of adolescents have been investigated by testing a model. This descriptive study was conducted with 1130 students aged between 12 and 18 who are enrolled at different schools in southern region of Aegean. In order to collect data, “Children's Depression Inventory”, “Rosenberg Self-esteem Scale” and “Social Media Addiction Scale” have been used. In order to test the hypotheses Pearson's correlation and structural equation modeling were performed. The findings revealed that self-esteem and social media addiction predict %20 of the daily internet use. Furthermore, while depression was associated with self-esteem and daily internet use directly, social media addiction was affecting depression indirectly. Tested model was able to predict %28 of the depression among adolescents.",
"title": ""
},
{
"docid": "9a13a2baf55676f82457f47d3929a4e7",
"text": "Humans are a cultural species, and the study of human psychology benefits from attention to cultural influences. Cultural psychology's contributions to psychological science can largely be divided according to the two different stages of scientific inquiry. Stage 1 research seeks cultural differences and establishes the boundaries of psychological phenomena. Stage 2 research seeks underlying mechanisms of those cultural differences. The literatures regarding these two distinct stages are reviewed, and various methods for conducting Stage 2 research are discussed. The implications of culture-blind and multicultural psychologies for society and intergroup relations are also discussed.",
"title": ""
},
{
"docid": "c018a5cb5e89ee697f20d634ea360954",
"text": "A comprehensive approach to the design of a stripline for EMC testing is given in this paper. The authors attention has been focused on the design items that are most crucial by the achievement of satisfactory value of the VSWR and the impedance matching at the feeding ports in the extended frequency range from 80 MHz to 1000 GHz. For this purpose, the Vivaldi-structure and other advanced structures were considered. The theoretical approach based on numerical simulations lead to conclusions which have been applied by the physical design and also evaluated by experimental results.",
"title": ""
},
{
"docid": "bfcd6adc2df1cb6260696f9aeb4d4ea6",
"text": "The microtubule-dependent GEF-H1 pathway controls synaptic re-networking and overall gene expression via regulating cytoskeleton dynamics. Understanding this pathway after ischemia is essential to developing new therapies for neuronal function recovery. However, how the GEF-H1 pathway is regulated following transient cerebral ischemia remains unknown. This study employed a rat model of transient forebrain ischemia to investigate alterations of the GEF-H1 pathway using Western blotting, confocal and electron microscopy, dephosphorylation analysis, and pull-down assay. The GEF-H1 activity was significantly upregulated by: (i) dephosphorylation and (ii) translocation to synaptic membrane and nuclear structures during the early phase of reperfusion. GEF-H1 protein was then downregulated in the brain regions where neurons were destined to undergo delayed neuronal death, but markedly upregulated in neurons that were resistant to the same episode of cerebral ischemia. Consistently, GTP-RhoA, a GEF-H1 substrate, was significantly upregulated after brain ischemia. Electron microscopy further showed that neuronal microtubules were persistently depolymerized in the brain region where GEF-H1 protein was downregulated after brain ischemia. The results demonstrate that the GEF-H1 activity is significantly upregulated in both vulnerable and resistant brain regions in the early phase of reperfusion. However, GEF-H1 protein is downregulated in the vulnerable neurons but upregulated in the ischemic resistant neurons during the recovery phase after ischemia. The initial upregulation of GEF-H1 activity may contribute to excitotoxicity, whereas the late upregulation of GEF-H1 protein may promote neuroplasticity after brain ischemia.",
"title": ""
},
{
"docid": "40dc7de2a08c07183606235500df3c4f",
"text": "Aerial imagery of an urban environment is often characterized by significant occlusions, sharp edges, and textureless regions, leading to poor 3D reconstruction using conventional multi-view stereo methods. In this paper, we propose a novel approach to 3D reconstruction of urban areas from a set of uncalibrated aerial images. A very general structural prior is assumed that urban scenes consist mostly of planar surfaces oriented either in a horizontal or an arbitrary vertical orientation. In addition, most structural edges associated with such surfaces are also horizontal or vertical. These two assumptions provide powerful constraints on the underlying 3D geometry. The main contribution of this paper is to translate the two constraints on 3D structure into intra-image-column and inter-image-column constraints, respectively, and to formulate the dense reconstruction as a 2-pass Dynamic Programming problem, which is solved in complete parallel on a GPU. The result is an accurate cloud of 3D dense points of the underlying urban scene. Our algorithm completes the reconstruction of 1M points with 160 available discrete height levels in under a hundred seconds. Results on multiple datasets show that we are capable of preserving a high level of structural detail and visual quality.",
"title": ""
},
{
"docid": "a74a063dfe2be9fbf0769277785c7e53",
"text": "There has been considerable interest in improving the capability to identify communities within large collections of social networking data. However, many of the existing algorithms will compartment an actor (node) into a single group, ignoring the fact that in real-world situations people tend to belong concurrently to multiple groups. Our work focuses on the ability to find overlapping communities by aggregating the community perspectives of friendship groups, derived from egonets. We will demonstrate that our algorithm not only finds overlapping communities, but additionally helps identify key members, which bind communities together. Additionally, we will highlight the parallel feature of the algorithm as a means of improving runtime performance.",
"title": ""
},
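A small networkx sketch of the egonet idea mentioned above: for each actor, the connected components of its ego network minus the ego give that actor's friendship groups, which are then merged into overlapping communities. The merge threshold and the karate-club example graph are illustrative assumptions, not the paper's actual aggregation algorithm.

# Friendship groups from egonets, naively merged into overlapping communities.
import networkx as nx

G = nx.karate_club_graph()

friendship_groups = []
for ego in G.nodes():
    ego_net = nx.ego_graph(G, ego)
    ego_net.remove_node(ego)                       # keep only the ego's friends
    for comp in nx.connected_components(ego_net):
        friendship_groups.append(frozenset(comp | {ego}))

# Naive aggregation: merge a group into the first community it overlaps heavily.
communities = []
for grp in sorted(friendship_groups, key=len, reverse=True):
    for com in communities:
        if len(grp & com) / len(grp) > 0.5:        # overlap threshold (assumed)
            com |= grp
            break
    else:
        communities.append(set(grp))

print(len(communities), "overlapping communities")
print([sorted(c) for c in communities[:3]])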
{
"docid": "d274a98efb4568c5c320fc66cab56efd",
"text": "This paper presents the design and development of autonomous attitude stabilization, navigation in unstructured, GPS-denied environments, aggressive landing on inclined surfaces, and aerial gripping using onboard sensors on a low-cost, custom-built quadrotor. The development of a multi-functional micro air vehicle (MAV) that utilizes inexpensive off-the-shelf components presents multiple challenges due to noise and sensor accuracy, and there are control challenges involved with achieving various capabilities beyond navigation. This paper addresses these issues by developing a complete system from the ground up, addressing the attitude stabilization problem using extensive filtering and an attitude estimation filter recently developed in the literature. Navigation in both indoor and outdoor environments is achieved using a visual Simultaneous Localization and Mapping (SLAM) algorithm that relies on an onboard monocular camera. The system utilizes nested controllers for attitude stabilization, vision-based navigation, and guidance, with the navigation controller implemented using a This research was supported by the National Science Foundation under CAREER Award ECCS-0748287. Electronic supplementary material The online version of this article (doi:10.1007/s10514-012-9286-z) contains supplementary material, which is available to authorized users. V. Ghadiok ( ) · W. Ren Department of Electrical Engineering, University of California, Riverside, Riverside, CA 92521, USA e-mail: vaibhav.ghadiok@ieee.org W. Ren e-mail: ren@ee.ucr.edu J. Goldin Electronic Systems Center, Hanscom Air Force Base, Bedford, MA 01731, USA e-mail: jeremy.goldin@us.af.mil nonlinear controller based on the sigmoid function. The efficacy of the approach is demonstrated by maintaining a stable hover even in the presence of wind gusts and when manually hitting and pulling on the quadrotor. Precision landing on inclined surfaces is demonstrated as an example of an aggressive maneuver, and is performed using only onboard sensing. Aerial gripping is accomplished with the addition of a secondary camera, capable of detecting infrared light sources, which is used to estimate the 3D location of an object, while an under-actuated and passively compliant manipulator is designed for effective gripping under uncertainty. The quadrotor is therefore able to autonomously navigate inside and outside, in the presence of disturbances, and perform tasks such as aggressively landing on inclined surfaces and locating and grasping an object, using only inexpensive, onboard sensors.",
"title": ""
},
{
"docid": "f811ec2ab6ce7e279e97241dc65de2a5",
"text": "Summary Kraljic's purchasing portfolio approach has inspired many academic writers to undertake further research into purchasing portfolio models. Although it is evident that power and dependence issues play an important role in the Kraljic matrix, scant quantitative research has been undertaken in this respect. In our study we have filled this gap by proposing quantitative measures for ‘relative power’ and ‘total interdependence’. By undertaking a comprehensive survey among Dutch purchasing professionals, we have empirically quantified ‘relative power’ and ‘total interdependence’ for each quadrant of the Kraljic portfolio matrix. We have compared theoretical expectations on power and dependence levels with our empirical findings. A remarkable finding is the observed supplier dominance in the strategic quadrant of the Kraljic matrix. This indicates that the supplier dominates even satisfactory partnerships. In the light of this finding future research cannot assume any longer that buyersupplier relationships in the strategic quadrant of the Kraljic matrix are necessarily characterised by symmetric power. 1 Marjolein C.J. Caniëls, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW), P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762724; Fax: +31 45 5762103 E-mail: marjolein.caniels@ou.nl 2 Cees J. Gelderman, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW) P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762590; Fax: +31 45 5762103 E-mail: kees.gelderman@ou.nl",
"title": ""
},
{
"docid": "b4e56855d6f41c5829b441a7d2765276",
"text": "College student attendance management of class plays an important position in the work of management of college student, this can help to urge student to class on time, improve learning efficiency, increase learning grade, and thus entirely improve the education level of the school. Therefore, colleges need an information system platform of check attendance management of class strongly to enhance check attendance management of class using the information technology which gathers the basic information of student automatically. According to current reality and specific needs of check attendance and management system of college students and the exist device of the system. Combined with the study of college attendance system, this paper gave the node design of check attendance system of class which based on RFID on the basic of characteristics of embedded ARM and RFID technology.",
"title": ""
},
{
"docid": "a6cf86ffa90c74b7d7d3254c7d33685a",
"text": "Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semistructured data as graphs where nodes correspond to primitives (parts, interest points, and segments) and edges characterize the relationships between these primitives. However, these nonvectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of--explicit/implicit--graph vectorization and embedding. This embedding process should be resilient to intraclass graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.",
"title": ""
},
{
"docid": "b31723195f18a128e2de04918808601d",
"text": "Realistic secure processors, including those built for academic and commercial purposes, commonly realize an “attested execution” abstraction. Despite being the de facto standard for modern secure processors, the “attested execution” abstraction has not received adequate formal treatment. We provide formal abstractions for “attested execution” secure processors and rigorously explore its expressive power. Our explorations show both the expected and the surprising. On one hand, we show that just like the common belief, attested execution is extremely powerful, and allows one to realize powerful cryptographic abstractions such as stateful obfuscation whose existence is otherwise impossible even when assuming virtual blackbox obfuscation and stateless hardware tokens. On the other hand, we show that surprisingly, realizing composable two-party computation with attested execution processors is not as straightforward as one might anticipate. Specifically, only when both parties are equipped with a secure processor can we realize composable two-party computation. If one of the parties does not have a secure processor, we show that composable two-party computation is impossible. In practice, however, it would be desirable to allow multiple legacy clients (without secure processors) to leverage a server’s secure processor to perform a multi-party computation task. We show how to introduce minimal additional setup assumptions to enable this. Finally, we show that fair multi-party computation for general functionalities is impossible if secure processors do not have trusted clocks. When secure processors have trusted clocks, we can realize fair two-party computation if both parties are equipped with a secure processor; but if only one party has a secure processor (with a trusted clock), then fairness is still impossible for general functionalities.",
"title": ""
},
{
"docid": "27a159a3980f753265d5ced9e98e7aef",
"text": "This research paper aims to analyze the credit risk involved in peer-to-peer (P2P) lending system of “LendingClub” Company. The P2P system allows investors to get significantly higher return on investment as compared to bank deposit, but it comes with a risk of the loan and interest not being repaid. Ensemble machine learning algorithms and preprocessing techniques are used to explore, analyze and determine the factors which play crucial role in predicting the credit risk involved in “LendingClub” publicly available 2013-2015 loan applications dataset. A loan is considered “good” if it's repaid with interest and on time. The algorithms are optimized to favor the potential good loans whilst identifying defaults or risky credits.",
"title": ""
},
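A minimal scikit-learn sketch of the described ensemble approach; the CSV path, the chosen feature columns, and the "Fully Paid" labeling rule are assumptions for illustration (the real LendingClub export has many more fields and needs heavier cleaning), and the models shown are generic ensembles rather than the exact ones used in the paper.

# Sketch: label loans as good/bad and score two ensemble classifiers by AUC.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("lendingclub_2013_2015.csv")               # hypothetical path
df["good_loan"] = (df["loan_status"] == "Fully Paid").astype(int)
features = ["loan_amnt", "int_rate", "annual_inc", "dti"]   # assumed subset
X = df[features].fillna(df[features].median())
y = df["good_loan"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, "AUC:", round(auc, 3))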
{
"docid": "5ac8759c0c1453ee60a0f3b6b228cf7f",
"text": "Combining learning with vision techniques in interactive image retrieval has been an active research topic during the past few years. However, existing learning techniques either are based on heuristics or fail to analyze the working conditions. Furthermore, there is almost no in depth study on how to effectively learn from the users when there are multiple visual features in the retrieval system. To address these limitations, in this paper, we present a vigorous optimization formulation of the learning process and solve the problem in a principled way. By using Lagrange multipliers, we have derived explicit solutions, which are both optimal and fast to compute. Extensive comparisons against state-ofthe-art techniques have been performed. Experiments were carried out on a large-size heterogeneous image collection consisting of 17,000 images. Retrieval performance was tested under a wide range of conditions. Various evaluation criteria, including precision-recall curve and rank measure, have demonstrated the effectiveness and robustness of the proposed technique.",
"title": ""
},
{
"docid": "53ab91cdff51925141c43c4bc1c6aade",
"text": "Floods are the most common natural disasters, and cause significant damage to life, agriculture and economy. Research has moved on from mathematical modeling or physical parameter based flood forecasting schemes, to methodologies focused around algorithmic approaches. The Internet of Things (IoT) is a field of applied electronics and computer science where a system of devices collects data in real time and transfers it through a Wireless Sensor Network (WSN) to the computing device for analysis. IoT generally combines embedded system hardware techniques along with data science or machine learning models. In this work, an IoT and machine learning based embedded system is proposed to predict the probability of floods in a river basin. The model uses a modified mesh network connection over ZigBee for the WSN to collect data, and a GPRS module to send the data over the internet. The data sets are evaluated using an artificial neural network model. The results of the analysis which are also appended show a considerable improvement over the currently existing methods.",
"title": ""
},
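A minimal sketch of the analysis stage described above: an artificial neural network mapping WSN sensor readings to a flood probability. The feature set, the synthetic data, and the network size are assumptions, not details from the paper.

# Toy flood-probability model trained on synthetic stand-in sensor data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# columns = [rainfall, water_level, flow_rate, soil_moisture], scaled to [0, 1]
X = rng.uniform(0, 1, size=(400, 4))
y = (0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] > 0.55).astype(int)

model = MLPClassifier(hidden_layer_sizes=(8, 4), max_iter=2000, random_state=0)
model.fit(X, y)

new_reading = np.array([[0.9, 0.8, 0.4, 0.6]])     # fresh values from the WSN
print("flood probability:", model.predict_proba(new_reading)[0, 1])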
{
"docid": "0a94a995f91afd641013b97dcec7da2a",
"text": "Two competing encoding concepts are known to scale well with growing amounts of XML data: XPath Accelerator encoding implemented by MonetDB for in-memory documents and X-Hive’s Persistent DOM for on-disk storage. We identified two ways to improve XPath Accelerator and present prototypes for the respective techniques: BaseX boosts inmemory performance with optimized data and value index structures while Idefix introduces native block-oriented persistence with logarithmic update behavior for true scalability, overcoming main-memory constraints. An easy-to-use Java-based benchmarking framework was developed and used to consistently compare these competing techniques and perform scalability measurements. The established XMark benchmark was applied to all four systems under test. Additional fulltext-sensitive queries against the well-known DBLP database complement the XMark results. Not only did the latest version of X-Hive finally surprise with good scalability and performance numbers. Also, both BaseX and Idefix hold their promise to push XPath Accelerator to its limits: BaseX efficiently exploits available main memory to speedup XML queries while Idefix surpasses main-memory constraints and rivals the on-disk leadership of X-Hive. The competition between XPath Accelerator and Persistent DOM definitely is relaunched.",
"title": ""
},
{
"docid": "d3c9699657fa11010a05181c08e33544",
"text": "The Internet-of-Things (IoT) is an emerging concept of network connectivity anytime and anywhere for billions of everyday objects, which has recently attracted tremendous attention from both the industry and academia. The rapid growth of IoT has been driven by recent advancements in consumer electronics, wireless network densification, 5G communication technologies, and cloud-computing enabled bigdata analytics. One of the key challenges for IoT is the limited network lifetime due to massive IoT devices being powered by batteries with finite capacities. The low-power and low-complexity backscatter communications (BackCom), which simply relies on passive reflecting and modulation an incident radiofrequency (RF) wave, has emerged to be a promising technology for tackling this challenge. However, the contemporary BackCom has several major limitations, such as short transmission range, low data rate and uni-directional information transmission. In this article, we present an overview of the next generation BackCom by discussing basic principles, system and network architectures and relevant techniques. Lastly, we describe the IoT application scenarios with the next generation BackCom.",
"title": ""
},
{
"docid": "7d8c8460254eb20f548957037c9e96c9",
"text": "The red palm weevil (RPW) Rhynchophorus ferrugineus Olivier (Coleoptera: Curculionidae) is one of the major pests of palms. The larvae bore into the palm trunk and feed on the palm tender tissues and sap, leading the host tree to death. The gut microbiota of insects plays a remarkable role in the host life and understanding the relationship dynamics between insects and their microbiota may improve the biological control of insect pests. The purpose of this study was to analyse the diversity of the gut microbiota of field-caught RPW larvae sampled in Sicily (Italy). The 16S rRNA gene-based Temporal Thermal Gradient Gel Electrophoresis (TTGE) of the gut microbiota of RPW field-trapped larvae revealed low bacterial diversity and stability of the community over seasons and among pools of larvae from different host trees. Pyrosequencing of the 16S rRNA gene V3 region confirmed low complexity and assigned 98% of the 75,564 reads to only three phyla: Proteobacteria (64.7%) Bacteroidetes (23.6%) and Firmicutes (9.6%) and three main families [Enterobacteriaceae (61.5%), Porphyromonadaceae (22.1%) and Streptococcaceae (8.9%)]. More than half of the reads could be classified at the genus level and eight bacterial genera were detected in the larval RPW gut at an abundance ≥1%: Dysgonomonas (21.8%), Lactococcus (8.9%), Salmonella (6.8%), Enterobacter (3.8%), Budvicia (2.8%), Entomoplasma (1.4%), Bacteroides (1.3%) and Comamonas (1%). High abundance of Enterobacteriaceae was also detected by culturing under aerobic conditions. Unexpectedly, acetic acid bacteria (AAB), that are known to establish symbiotic associations with insects relying on sugar-based diets, were not detected. The RPW gut microbiota is composed mainly of facultative and obligate anaerobic bacteria with a fermentative metabolism. These bacteria are supposedly responsible for palm tissue fermentation in the tunnels where RPW larvae thrive and might have a key role in the insect nutrition, and other functions that need to be investigated.",
"title": ""
},
{
"docid": "b38939ec3c6f8e10553f934ceab401ff",
"text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.",
"title": ""
},
{
"docid": "62cc85ab7517797f50ce5026fbc5617a",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
},
{
"docid": "b294fb1586b2281fb7cb80c7268370ce",
"text": "Most experiments on conformity have been conducted in relation to judgments of physical reality; surprisingly few papers have experimentally examined the influence of group norms on social issues with a moral component. In response to this, participants were told that they were either in a minority or in a majority relative to their university group in terms of their attitudes toward recognition of gay couples in law (Expt 1: N = 205) and a government apology to Aborigines (Expt 2: N = 110). In both experiments, it was found that participants who had a weak moral basis for their attitude conformed to the group norm on private behaviours. In contrast, those who had a strong moral basis for their attitude showed non-conformity on private behaviours and counter-conformity on public behaviours. Incidences of non-conformity and counter-conformity are discussed with reference to theory and research on normative influence.",
"title": ""
}
] |
scidocsrr
|
294a644fdad5c5c3b3a2238b5b5fbd7b
|
Networked Participatory Scholarship: Emergent techno-cultural pressures toward open and digital scholarship in online networks
|
[
{
"docid": "86ecf68fcd67913086df2122ad99c763",
"text": "Behaviorism, cognitivism, and constructivism are the three broad learning theories most often utilized in the creation of instructional environments. These theories, however, were developed in a time when learning was not impacted through technology. Over the last twenty years, technology has reorganized how we live, how we communicate, and how we learn. Learning needs and theories that describe learning principles and processes, should be reflective of underlying social environments. Vaill emphasizes that “learning must be a way of being – an ongoing set of attitudes and actions by individuals and groups that they employ to try to keep abreast o the surprising, novel, messy, obtrusive, recurring events...” (1996, p.42).",
"title": ""
}
] |
[
{
"docid": "8c308305b4a04934126c4746c8333b52",
"text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.",
"title": ""
},
{
"docid": "ba4ffbb6c3dc865f803cbe31b52919c5",
"text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.",
"title": ""
},
{
"docid": "90fa2211106f4a8e23c5a9c782f1790e",
"text": "Page layout is dominant in many genres of physical documents, but it is frequently overlooked when texts are digitised. Its presence is largely determined by available technologies and skills: If no provision is made for creating, preserving, or describing layout, then it tends not to be created, preserved or described. However, I argue, the significance and utility of layout for readers is such that it will survive or re-emerge. I review how layout has been treated in the literature of graphic design and linguistics, and consider its role as a memory tool. I distinguish between fixed, flowed, fugitive and fragmented pages, determined not only by authorial intent but also by technical constraints. Finally, I describe graphic literacy as a component of functional literacy and suggest that corresponding graphic literacies are needed not only by readers, but by creators of documents and by the information management technologies that produce, deliver, and store them.",
"title": ""
},
{
"docid": "e2fc186c227910de013a7456bc6d800d",
"text": "The study was carried out to investigate the acute and sublethal toxicity of Moringa oleifera seed extract on hematological and biochemical variables of a freshwater fish Cyprinus carpio under laboratory conditions. The 96 h LC50 value of M. oleifera seed extract to the fish C. carpio was estimated by probit analysis method and was found to be 124.0 mg/L (with 95% confidence limits). For sublethal studies a non lethal dose of 1/10th of 96 h LC50 value (12.40 mg/L) was taken. During acute treatment (96 h), hematological variables like red blood cell count (RBC), hemoglobin (Hb), hematocrit (Hct), and mean corpuscular hemoglobin concentration (MCHC) were significantly (P<0.05) decreased in fish exposed to seed extract. However a significant (P<0.05) increase in white blood cell count (WBC), mean corpuscular volume (MCV) and mean corpuscular hemoglobin (MCH) value was observed in the exposed fish during above treatment period when compared to that of the control groups. Biochemical parameters such as plasma protein and glucose levels significantly declined in fish exposed to seed extract while a significant (P<0.05) increase in aspartate aminotransferase (AST), alanine aminotransferase (ALT) and alkaline phosphatase (ALP) activity was observed. During sublethal treatment (12.40 mg/L), WBC count, MCV, MCH, plasma glucose, AST, ALT and ALP activities were gradually elevated (P<0.05) at the end of 7, 14, 21, 28 and 35th days in seed extract exposed fish, whereas plasma protein level declined. However, a biphasic trend was noticed in Hb, Hct, RBC and MCHC levels. This study may provide baseline information about the toxicity of M. oleifera seed extract to C. carpio and to establish safer limit in water purification.",
"title": ""
},
{
"docid": "cdb937def5a92e3843a761f57278783e",
"text": "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.",
"title": ""
},
{
"docid": "eba5ef77b594703c96c0e2911fcce7b0",
"text": "Deep Neural Network Hidden Markov Models, or DNN-HMMs, are recently very promising acoustic models achieving good speech recognition results over Gaussian mixture model based HMMs (GMM-HMMs). In this paper, for emotion recognition from speech, we investigate DNN-HMMs with restricted Boltzmann Machine (RBM) based unsupervised pre-training, and DNN-HMMs with discriminative pre-training. Emotion recognition experiments are carried out on these two models on the eNTERFACE'05 database and Berlin database, respectively, and results are compared with those from the GMM-HMMs, the shallow-NN-HMMs with two layers, as well as the Multi-layer Perceptrons HMMs (MLP-HMMs). Experimental results show that when the numbers of the hidden layers as well hidden units are properly set, the DNN could extend the labeling ability of GMM-HMM. Among all the models, the DNN-HMMs with discriminative pre-training obtain the best results. For example, for the eNTERFACE'05 database, the recognition accuracy improves 12.22% from the DNN-HMMs with unsupervised pre-training, 11.67% from the GMM-HMMs, 10.56% from the MLP-HMMs, and even 17.22% from the shallow-NN-HMMs, respectively.",
"title": ""
},
{
"docid": "33a1450fa00705d5ef20780b4e1de6b3",
"text": "This paper reviews the range of sensors used in electronic nose (e-nose) systems to date. It outlines the operating principles and fabrication methods of each sensor type as well as the applications in which the different sensors have been utilised. It also outlines the advantages and disadvantages of each sensor for application in a cost-effective low-power handheld e-nose system.",
"title": ""
},
{
"docid": "332bcd9b49f3551d8f07e4f21a881804",
"text": "Attention plays a critical role in effective learning. By means of attention assessment, it helps learners improve and review their learning processes, and even discover Attention Deficit Hyperactivity Disorder (ADHD). Hence, this work employs modified smart glasses which have an inward facing camera for eye tracking, and an inertial measurement unit for head pose estimation. The proposed attention estimation system consists of eye movement detection, head pose estimation, and machine learning. In eye movement detection, the central point of the iris is found by the locally maximum curve via the Hough transform where the region of interest is derived by the identified left and right eye corners. The head pose estimation is based on the captured inertial data to generate physical features for machine learning. Here, the machine learning adopts Genetic Algorithm (GA)-Support Vector Machine (SVM) where the feature selection of Sequential Floating Forward Selection (SFFS) is employed to determine adequate features, and GA is to optimize the parameters of SVM. Our experiments reveal that the proposed attention estimation system can achieve the accuracy of 93.1% which is fairly good as compared to the conventional systems. Therefore, the proposed system embedded in smart glasses brings users mobile, convenient, and comfortable to assess their attention on learning or medical symptom checker.",
"title": ""
},
{
"docid": "57dfc6f8b462512a3a2328f897ea44a6",
"text": "We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.",
"title": ""
},
{
"docid": "26d2c79da3baaba9e1e29e8a08136c85",
"text": "Density operators allow for representing ambiguity about a vector representation, both in quantum theory and in distributional natural language meaning. For mally equivalently, they allow for discarding part of the description of a composite system, where we co nsider the discarded part to be the context. We introduce dual density operators, which allow f r two independent notions of context. We demonstrate the use of dual density operators within a gra mmatical-compositional distributional framework for natural language meaning. We show that dual de nsity operators can be used to simultaneously represent: (i) ambiguity about word meanings (e. g. queen as a person vs. queen as a band), and (ii) lexical entailment (e.g. tiger ⇒ mammal). We provide a proof-of-concept example.",
"title": ""
},
{
"docid": "a88809760ba85afd558d4dd076a4dec8",
"text": "Traditional web search engines treat queries as sequences of keywords and return web pages that contain those keywords as results. Such a mechanism is effective when the user knows exactly the right words that web pages use to describe the content they are looking for. However, it is less than satisfactory or even downright hopeless if the user asks for a concept or topic that has broader and sometimes ambiguous meanings. This is because keyword-based search engines index web pages by keywords and not by concepts or topics. In fact they do not understand the content of the web pages. In this paper, we present a framework that improves web search experiences through the use of a probabilistic knowledge base. The framework classifies web queries into different patterns according to the concepts and entities in addition to keywords contained in these queries. Then it produces answers by interpreting the queries with the help of the knowledge base. Our preliminary results showed that the new framework is capable of answering various types of topic-like queries with much higher user satisfaction, and is therefore a valuable addition to the traditional web search.",
"title": ""
},
{
"docid": "1d5f07817f8c10226718e1fb20782dc4",
"text": "Myelomeningocele (MMC) is the most frequent congenital defect of the central nervous system for which there is no satisfactory alternative to postnatal treatment. On the contrary prenatal MMC surgery is conducting before birth and is aimed at protecting from Chiari II malformation. The main goal of fetal MMC repair is to improve development and life quality of children with Chiari II malformation. Management of Myelomeningocele Study (MOMS) which was published in 2011 clearly confirmed effectiveness of prenatal surgery. In this paper we compare MOMS results with our own clinical experience. Thanks to high effectiveness and significant improvement in safety of maternal-fetal surgery prenatal MMC surgery become a new standard of treatment.",
"title": ""
},
{
"docid": "40f8240220dad82a7a2da33932fb0e73",
"text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.",
"title": ""
},
{
"docid": "658f2d045fe005ee1a4016b2de0ae1b1",
"text": "Given a partial description like “she opened the hood of the car,” humans can reason about the situation and anticipate what might come next (“then, she examined the engine”). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning. We present Swag, a new dataset with 113k multiple choice questions about a rich spectrum of grounded situations. To address the recurring challenges of the annotation artifacts and human biases found in many existing datasets, we propose Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data. To account for the aggressive adversarial filtering, we use state-of-theart language models to massively oversample a diverse set of potential counterfactuals. Empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy (88%), various competitive models struggle on our task. We provide comprehensive analysis that indicates significant opportunities for future research.",
"title": ""
},
{
"docid": "6f05f55b7003616deb06f27f84e2cc61",
"text": "Automatic detection and segmentation of brain tumors in 3D MR neuroimages can significantly aid early diagnosis, surgical planning, and follow-up assessment. However, due to diverse location and varying size, primary and metastatic tumors present substantial challenges for detection. We present a fully automatic, unsupervised algorithm that can detect single and multiple tumors from 3 to 28,079 mm3 in volume. Using 20 clinical 3D MR scans containing from 1 to 15 tumors per scan, the proposed approach achieves between 87.84% and 95.30% detection rate and an average end-to-end running time of under 3 minutes. In addition, 5 normal clinical 3D MR scans are evaluated quantitatively to demonstrate that the approach has the potential to discriminate between abnormal and normal brains.",
"title": ""
},
{
"docid": "34084a12a4437c3d2126b06ffbf8c734",
"text": "OBJECTIVE\nThe psychopathy checklist-revised (PCL-R; Hare, 1991, 2003) is often used to assess risk of violence, perhaps based on the assumption that it captures emotionally detached individuals who are driven to prey upon others. This study is designed to assess the relation between (a) core interpersonal and affective traits of psychopathy and impulsive antisociality on the one hand and (b) the risk of future violence and patterns of motivation for past violence on the other.\n\n\nMETHOD\nA research team reliably assessed a sample of 158 male offenders for psychopathy, using both the interview-based PCL-R and the self-report psychopathic personality inventory (PPI: Lilienfeld & Andrews, 1996). Then, a second independent research team assessed offenders' lifetime patterns of violence and their motivation. After these baseline assessments, offenders were followed in prison or the community for up to 1 year to assess their involvement in 3 different forms of violence. Baseline and follow-up assessments included both interviews and reviews of official records.\n\n\nRESULTS\nFirst, the PPI manifested incremental validity in predicting future violence over the PCL-R (but not vice versa)-and most of its predictive power derived solely from impulsive antisociality. Second, impulsive antisociality-not interpersonal and affective traits specific to psychopathy-were uniquely associated with instrumental lifetime patterns of past violence. The latter psychopathic traits are narrowly associated with deficits in motivation for violence (e.g., lack of fear or lack of provocation).\n\n\nCONCLUSIONS\nThese findings and their consistency with some past research led us to advise against making broad generalizations about the relation between psychopathy and violence.",
"title": ""
},
{
"docid": "ff8cc7166b887990daa6ef355695e54f",
"text": "The knowledge-based theory of the firm suggests that knowledge is the organizational asset that enables sustainable competitive advantage in hypercompetitive environments. The emphasis on knowledge in today’s organizations is based on the assumption that barriers to the transfer and replication of knowledge endow it with strategic importance. Many organizations are developing information systems designed specifically to facilitate the sharing and integration of knowledge. Such systems are referred to as Knowledge Management System (KMS). Because KMS are just beginning to appear in organizations, little research and field data exists to guide the development and implementation of such systems or to guide expectations of the potential benefits of such systems. This study provides an analysis of current practices and outcomes of KMS and the nature of KMS as they are evolving in fifty organizations. The findings suggest that interest in KMS across a variety of industries is very high, the technological foundations are varied, and the major",
"title": ""
},
{
"docid": "4142b1fc9e37ffadc6950105c3d99749",
"text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)",
"title": ""
},
{
"docid": "7df97d3a5c393053b22255a0414e574a",
"text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u , a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n2 1ogn)-time algorithm for this problem. We give an implementation of Suurballe’s algorithm that runs in O(m log(, +,+)n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.",
"title": ""
},
{
"docid": "d5146275a18b9ffcdcd27c07c07ef27a",
"text": "Pathogens and parasites can manipulate their hosts to optimize their own fitness. For instance, bacterial pathogens have been shown to affect their host plants’ volatile and non-volatile metabolites, which results in increased attraction of insect vectors to the plant, and, hence, to increased pathogen dispersal. Behavioral manipulation by parasites has also been shown for mice, snails and zebrafish as well as for insects. Here we show that infection by pathogenic bacteria alters the social communication system of Drosophila melanogaster. More specifically, infected flies and their frass emit dramatically increased amounts of fly odors, including the aggregation pheromones methyl laurate, methyl myristate, and methyl palmitate, attracting healthy flies, which in turn become infected and further enhance pathogen dispersal. Thus, olfactory cues for attraction and aggregation are vulnerable to pathogenic manipulation, and we show that the alteration of social pheromones can be beneficial to the microbe while detrimental to the insect host. Behavioral manipulation of host by pathogens has been observed in vertebrates, invertebrates, and plants. Here the authors show that in Drosophila, infection with pathogenic bacteria leads to increased pheromone release, which attracts healthy flies. This process benefits the pathogen since it enhances bacterial dispersal, but is detrimental to the host.",
"title": ""
}
] |
scidocsrr
|
874e3ee8d4ff284ba979d73d2351c23a
|
Wireless Sensor Networks: a Survey on Environmental Monitoring
|
[
{
"docid": "ec06587bff3d5c768ab9083bd480a875",
"text": "Wireless sensor networks are an emerging technology for low-cost, unattended monitoring of a wide range of environments, and their importance has been enforced by the recent delivery of the IEEE 802.15.4 standard for the physical and MAC layers and the forthcoming Zigbee standard for the network and application layers. The fast progress of research on energy efficiency, networking, data management and security in wireless sensor networks, and the need to compare with the solutions adopted in the standards motivates the need for a survey on this field.",
"title": ""
}
] |
[
{
"docid": "b0eb2048209c7ceeb3c67c2b24693745",
"text": "Modeling an ontology is a hard and time-consuming task. Although methodologies are useful for ontologists to create good ontologies, they do not help with the task of evaluating the quality of the ontology to be reused. For these reasons, it is imperative to evaluate the quality of the ontology after constructing it or before reusing it. Few studies usually present only a set of criteria and questions, but no guidelines to evaluate the ontology. The effort to evaluate an ontology is very high as there is a huge dependence on the evaluator’s expertise to understand the criteria and questions in depth. Moreover, the evaluation is still very subjective. This study presents a novel methodology for ontology evaluation, taking into account three fundamental principles: i) it is based on the Goal, Question, Metric approach for empirical evaluation; ii) the goals of the methodologies are based on the roles of knowledge representations combined with specific evaluation criteria; iii) each ontology is evaluated according to the type of ontology. The methodology was empirically evaluated using different ontologists and ontologies of the same domain. The main contributions of this study are: i) defining a step-by-step approach to evaluate the quality of an ontology; ii) proposing an evaluation based on the roles of knowledge representations; iii) the explicit difference of the evaluation according to the type of the ontology iii) a questionnaire to evaluate the ontologies; iv) a statistical model that automatically calculates the quality of the ontologies.",
"title": ""
},
{
"docid": "9404d1fd58dbd1d83c2d503e54ffd040",
"text": "This work examines the association between the Big Five personality dimensions, the most relevant demographic factors (sex, age and relationship status), and subjective well-being. A total of 236 nursing professionals completed the NEO Five Factor Inventory (NEO-FFI) and the Affect-Balance Scale (ABS). Regression analysis showed personality as one of the most important correlates of subjective well-being, especially through Extraversion and Neuroticism. There was a positive association between Openness to experience and the positive and negative components of affect. Likewise, the most basic demographic variables (sex, age and relationship status) are found to be differentially associated with the different elements of subjective well-being, and the explanation for these associations is highly likely to be found in the links between demographic variables and personality. In the same way as control of the effect of demographic variables is necessary for isolating the effect of personality on subjective well-being, control of personality should permit more accurate analysis of the role of demographic variables in relation to the subjective well-being construct. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "683ca94061450b83292ffca3fffc66d7",
"text": "A sensorless internal temperature monitoring method for induction motors is proposed in this paper. This method can be embedded in standard motor drives, and is based on the stator windings resistance variation with temperature. A small AC signal is injected to the motor, superimposed to the power supply current, in order to measure the stator resistance online. The proposed method has the advantage of requiring a very low-level monitoring signal, hence the motor torque perturbations and additional power losses are negligible. Furthermore, temperature estimations do not depend on the knowledge of any other motor parameter, since the method is not based on a model. This makes the proposed method more robust than model-based methods. Experimental results that validate the proposal are also presented.",
"title": ""
},
{
"docid": "5495aeaa072a1f8f696298ebc7432045",
"text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.",
"title": ""
},
{
"docid": "460aa0df99a3e88a752d5f657f1565de",
"text": "Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is musical anhednia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even to which he had listened pleasantly before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music could be selectively impaired without any disturbance of other musical, neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.",
"title": ""
},
{
"docid": "4d136b60209ef625c09a15e3e5abb7f7",
"text": "Alterations in the bidirectional interactions between the intestine and the nervous system have important roles in the pathogenesis of irritable bowel syndrome (IBS). A body of largely preclinical evidence suggests that the gut microbiota can modulate these interactions. A small and poorly defined role for dysbiosis in the development of IBS symptoms has been established through characterization of altered intestinal microbiota in IBS patients and reported improvement of subjective symptoms after its manipulation with prebiotics, probiotics, or antibiotics. It remains to be determined whether IBS symptoms are caused by alterations in brain signaling from the intestine to the microbiota or primary disruption of the microbiota, and whether they are involved in altered interactions between the brain and intestine during development. We review the potential mechanisms involved in the pathogenesis of IBS in different groups of patients. Studies are needed to better characterize alterations to the intestinal microbiome in large cohorts of well-phenotyped patients, and to correlate intestinal metabolites with specific abnormalities in gut-brain interactions.",
"title": ""
},
{
"docid": "aec82326c1fea34da9935731e4c476f4",
"text": "This paper presents a trajectory tracking control design which provides the essential spatial-temporal feedback control capability for fixed-wing unmanned aerial vehicles (UAVs) to execute a time critical mission reliably. In this design, a kinematic trajectory tracking control law and a control gain selection method are developed to allow the control law to be implemented on a fixed-wing UAV based on the platform's dynamic capability. The tracking control design assumes the command references of the heading and airspeed control systems are the accessible control inputs, and it does not impose restrictive model assumptions on the UAV's control systems. The control design is validated using a high-fidelity nonlinear six degrees of freedom (6DOF) model and the reported results suggest that the proposed tracking control design is able to track time-parameterized trajectories stably with robust control performance.",
"title": ""
},
{
"docid": "4b432e49485b57ddb1921478f2917d4b",
"text": "Dynamic perturbations of reaching movements are an important technique for studying motor learning and adaptation. Adaptation to non-contacting, velocity-dependent inertial Coriolis forces generated by arm movements during passive body rotation is very rapid, and when complete the Coriolis forces are no longer sensed. Adaptation to velocity-dependent forces delivered by a robotic manipulandum takes longer and the perturbations continue to be perceived even when adaptation is complete. These differences reflect adaptive self-calibration of motor control versus learning the behavior of an external object or 'tool'. Velocity-dependent inertial Coriolis forces also arise in everyday behavior during voluntary turn and reach movements but because of anticipatory feedforward motor compensations do not affect movement accuracy despite being larger than the velocity-dependent forces typically used in experimental studies. Progress has been made in understanding: the common features that determine adaptive responses to velocity-dependent perturbations of jaw and limb movements; the transfer of adaptation to mechanical perturbations across different contact sites on a limb; and the parcellation and separate representation of the static and dynamic components of multiforce perturbations.",
"title": ""
},
{
"docid": "a48915859a7d772ee8515cb106c79ec1",
"text": "Mathematical modelling is increasingly used to get insights into the functioning of complex biological networks. In this context, Petri nets (PNs) have recently emerged as a promising tool among the various methods employed for the modelling and analysis of molecular networks. PNs come with a series of extensions, which allow different abstraction levels, from purely qualitative to more complex quantitative models. Noteworthily, each of these models preserves the underlying graph, which depicts the interactions between the biological components. This article intends to present the basics of the approach and to foster the potential role PNs could play in the development of the computational systems biology.",
"title": ""
},
{
"docid": "1925162dafab9fb0522f625782b7e7a3",
"text": "Breast cancer is the most frequently diagnosed malignancy and the second leading cause of mortality in women . In the last decade, ultrasound along with digital mammography has come to be regarded as the gold standard for breast cancer diagnosis. Automatically detecting tumors and extracting lesion boundaries in ultrasound images is difficult due to their specular nature and the variance in shape and appearance of sonographic lesions. Past work on automated ultrasonic breast lesion segmentation has not addressed important issues such as shadowing artifacts or dealing with similar tumor like structures in the sonogram. Algorithms that claim to automatically classify ultrasonic breast lesions, rely on manual delineation of the tumor boundaries. In this paper, we present a novel technique to automatically find lesion margins in ultrasound images, by combining intensity and texture with empirical domain specific knowledge along with directional gradient and a deformable shape-based model. The images are first filtered to remove speckle noise and then contrast enhanced to emphasize the tumor regions. For the first time, a mathematical formulation of the empirical rules used by radiologists in detecting ultrasonic breast lesions, popularly known as the \"Stavros Criteria\" is presented in this paper. We have applied this formulation to automatically determine a seed point within the image. Probabilistic classification of image pixels based on intensity and texture is followed by region growing using the automatically determined seed point to obtain an initial segmentation of the lesion. Boundary points are found on the directional gradient of the image. Outliers are removed by a process of recursive refinement. These boundary points are then supplied as an initial estimate to a deformable model. Incorporating empirical domain specific knowledge along with low and high-level knowledge makes it possible to avoid shadowing artifacts and lowers the chance of confusing similar tumor like structures for the lesion. The system was validated on a database of breast sonograms for 42 patients. The average mean boundary error between manual and automated segmentation was 6.6 pixels and the normalized true positive area overlap was 75.1%. The algorithm was found to be robust to 1) variations in system parameters, 2) number of training samples used, and 3) the position of the seed point within the tumor. Running time for segmenting a single sonogram was 18 s on a 1.8-GHz Pentium machine.",
"title": ""
},
{
"docid": "09623c821f05ffb7840702a5869be284",
"text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.",
"title": ""
},
{
"docid": "cd1c983fcf0b6225ede1504db701962a",
"text": "The method introduced in this paper aims at helping deep learning practitioners faced with an overfit problem. The idea is to replace, in a multi-branch network, the standard summation of parallel branches with a stochastic affine combination. Applied to 3-branch residual networks, shake-shake regularization improves on the best single shot published results on CIFAR-10 and CIFAR100 by reaching test errors of 2.86% and 15.85%. Experiments on architectures without skip connections or Batch Normalization show encouraging results and open the door to a large set of applications. Code is available at https://github.com/xgastaldi/shake-shake.",
"title": ""
},
{
"docid": "2b00f2b02fa07cdd270f9f7a308c52c5",
"text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement. A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.",
"title": ""
},
{
"docid": "c68668c82d2512cdea187ad7f94d2939",
"text": "Traditional personalized video recommendation methods focus on utilizing user profile or user history behaviors to model user interests, which follows a static strategy and fails to capture the swift shift of the short-term interests of users. According to our cross-platform data analysis, the information emergence and propagation is faster in social textual stream-based platforms than that in multimedia sharing platforms at micro user level. Inspired by this, we propose a dynamic user modeling strategy to tackle personalized video recommendation issues in the multimedia sharing platform YouTube, by transferring knowledge from the social textual stream-based platform Twitter. In particular, the cross-platform video recommendation strategy is divided into two steps. (1) Real-time hot topic detection: the hot topics that users are currently following are extracted from users' tweets, which are utilized to obtain the related videos in YouTube. (2) Time-aware video recommendation: for the target user in YouTube, the obtained videos are ranked by considering the user profile in YouTube, time factor, and quality factor to generate the final recommendation list. In this way, the short-term (hot topics) and long-term (user profile) interests of users are jointly considered. Carefully designed experiments have demonstrated the advantages of the proposed method.",
"title": ""
},
{
"docid": "98e557f291de3b305a91e47f59a9ed34",
"text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frameto-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfMNet extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"title": ""
},
{
"docid": "9d55947637b358c4dc30d7ba49885472",
"text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;",
"title": ""
},
{
"docid": "fa0674b3e79c1573af621276caef9709",
"text": "BACKGROUND\nDuring treatment of upper auricular malformations, the author found that patients with cryptotia and patients with solitary helical and/or antihelical adhesion malformations showed the same anatomical finding of cartilage adhesion. The author defined them together as upper auricular adhesion malformations.\n\n\nMETHODS\nBetween March of 1992 and March of 2006, 194 upper auricular adhesion malformations were corrected in 137 patients. All of these cases were retrospectively studied and classified. Of these, 92 malformations in 68 recent patients were corrected with new surgical methods (these were followed up for more than 6 months).\n\n\nRESULTS\nThe group of solitary helical and/or antihelical cartilage malformation patients was classified as group I and the cryptotia group as group II. These two groups were subdivided according to features of cartilage adhesion and classified into seven subgroups. Thirty-two malformations were classified as belonging to group I and 162 malformations to group II. There were 61 patients with bilateral upper auricular adhesion malformations. Nineteen patients (31 percent of the patients with bilateral malformations) showed malformations belonging to both groups I and II on both ears. On postoperative observation in patients corrected with new methods, it was noticed that the following unfavorable results had occurred in 18 upper auricular adhesion malformation cases (20 percent): venous congestion or partial skin necrosis of used flaps, \"pinched antitragus,\" low-set upper auricle, hypertrophic scars, and baldness.\n\n\nCONCLUSIONS\nThe new consideration for, and the singling out of, upper auricular adhesion malformation can lead to better understanding of the groups of upper auricular malformations to which it belongs, the decision for treatment, and, possibly, clarification of the pathophysiology in the future.",
"title": ""
},
{
"docid": "6be148b33b338193ffbde2683ddc8991",
"text": "Predicting stock exchange rates is receiving increasing attention and is a vital financial problem as it contributes to the development of effective strategies for stock exchange transactions. The forecasting of stock price movement in general is considered to be a thought-provoking and essential task for financial time series' exploration. In this paper, a Least Absolute Shrinkage and Selection Operator (LASSO) method based on a linear regression model is proposed as a novel method to predict financial market behavior. LASSO method is able to produce sparse solutions and performs very well when the numbers of features are less as compared to the number of observations. Experiments were performed with Goldman Sachs Group Inc. stock to determine the efficiency of the model. The results indicate that the proposed model outperforms the ridge linear regression model.",
"title": ""
},
{
"docid": "87e8b5b75b5e83ebc52579e8bbae04f0",
"text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.",
"title": ""
},
{
"docid": "c3b652b561e38a51f1fa40483532e22d",
"text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market. It was impacted by competition game between upstream firms and downstream firms. Based on the game theory and other previous studies,this paper built a dynamic game model of two-stage competition between the oligopoly suppliers of upstream and the vertical integration firms of downstream manufacturers. In the first stage, it analyzed the influences on integration degree by prices of intermediate goods when an oligopoly firm engages in a Bertrand-game if outputs are not limited. Moreover, it analyzed the influences on integration degree by price-diverge of intermediate goods if outputs were not restricted within a Bertrand Duopoly game equilibrium. In the second stage, there is a Cournot duopoly game between downstream specialization firms and downstream integration firms. Their marginal costs are affected by the integration degree and their yields are affected either under indifferent manufacture conditions. Finally, prices of intermediate goods are determined by the competition of upstream firms, the prices of intermediate goods affect the changes of integration degree between upstream firms and downstream firms. The conclusions can be referenced to decision-making of integration in market competition.",
"title": ""
}
] |
scidocsrr
|
003ce65c8e4263daa4fc7fbaa176246d
|
Dual-Polarization Slot Antenna With High Cross-Polarization Discrimination for Indoor Small-Cell MIMO Systems
|
[
{
"docid": "1010eaf1bb85e14c31b5861367115b32",
"text": "This letter presents a design of a dual-orthogonal, linear polarized antenna for the UWB-IR technology in the frequency range from 3.1 to 10.6 GHz. The antenna is compact with dimensions of 40 times 40 mm of the radiation plane, which is orthogonal to the radiation direction. Both the antenna and the feeding network are realized in planar technology. The radiation principle and the computed design are verified by a prototype. The input impedance matching is better than -6 dB. The measured results show a mean gain in copolarization close to 4 dBi. The cross-polarizations suppression w.r.t. the copolarization is better than 20 dB. Due to its features, the antenna is suited for polarimetric ultrawideband (UWB) radar and UWB multiple-input-multiple-output (MIMO) applications.",
"title": ""
}
] |
[
{
"docid": "ffbe5d7219abcb5f7cef4be54302e3a0",
"text": "Modern medical care is influenced by two paradigms: 'evidence-based medicine' and 'patient-centered medicine'. In the last decade, both paradigms rapidly gained in popularity and are now both supposed to affect the process of clinical decision making during the daily practice of physicians. However, careful analysis shows that they focus on different aspects of medical care and have, in fact, little in common. Evidence-based medicine is a rather young concept that entered the scientific literature in the early 1990s. It has basically a positivistic, biomedical perspective. Its focus is on offering clinicians the best available evidence about the most adequate treatment for their patients, considering medicine merely as a cognitive-rational enterprise. In this approach the uniqueness of patients, their individual needs and preferences, and their emotional status are easily neglected as relevant factors in decision-making. Patient-centered medicine, although not a new phenomenon, has recently attracted renewed attention. It has basically a humanistic, biopsychosocial perspective, combining ethical values on 'the ideal physician', with psychotherapeutic theories on facilitating patients' disclosure of real worries, and negotiation theories on decision making. It puts a strong focus on patient participation in clinical decision making by taking into account the patients' perspective, and tuning medical care to the patients' needs and preferences. However, in this approach the ideological base is better developed than its evidence base. In modern medicine both paradigms are highly relevant, but yet seem to belong to different worlds. The challenge for the near future is to bring these separate worlds together. The aim of this paper is to give an impulse to this integration. Developments within both paradigms can benefit from interchanging ideas and principles from which eventually medical care will benefit. In this process a key role is foreseen for communication and communication research.",
"title": ""
},
{
"docid": "59a407f744fa5686c09e05e58a7e8373",
"text": "More than 50 years ago, John Bell proved that no theory of nature that obeys locality and realism can reproduce all the predictions of quantum theory: in any local-realist theory, the correlations between outcomes of measurements on distant particles satisfy an inequality that can be violated if the particles are entangled. Numerous Bell inequality tests have been reported; however, all experiments reported so far required additional assumptions to obtain a contradiction with local realism, resulting in ‘loopholes’. Here we report a Bell experiment that is free of any such additional assumption and thus directly tests the principles underlying Bell’s inequality. We use an event-ready scheme that enables the generation of robust entanglement between distant electron spins (estimated state fidelity of 0.92 ± 0.03). Efficient spin read-out avoids the fair-sampling assumption (detection loophole), while the use of fast random-basis selection and spin read-out combined with a spatial separation of 1.3 kilometres ensure the required locality conditions. We performed 245 trials that tested the CHSH–Bell inequality S ≤ 2 and found S = 2.42 ± 0.20 (where S quantifies the correlation between measurement outcomes). A null-hypothesis test yields a probability of at most P = 0.039 that a local-realist model for space-like separated sites could produce data with a violation at least as large as we observe, even when allowing for memory in the devices. Our data hence imply statistically significant rejection of the local-realist null hypothesis. This conclusion may be further consolidated in future experiments; for instance, reaching a value of P = 0.001 would require approximately 700 trials for an observed S = 2.4. With improvements, our experiment could be used for testing less-conventional theories, and for implementing device-independent quantum-secure communication and randomness certification.",
"title": ""
},
{
"docid": "5946378b291a1a0e1fb6df5cd57d716f",
"text": "Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds. The teammates ✩This article contains material from 4 prior conference papers [11–14]. Email addresses: sam@cogitai.com (Samuel Barrett), rosenfa@jct.ac.il (Avi Rosenfeld), sarit@cs.biu.ac.il (Sarit Kraus), pstone@cs.utexas.edu (Peter Stone) 1This work was performed while Samuel Barrett was a graduate student at the University of Texas at Austin. 2Corresponding author. Preprint submitted to Elsevier October 30, 2016 To appear in http://dx.doi.org/10.1016/j.artint.2016.10.005 Artificial Intelligence (AIJ)",
"title": ""
},
{
"docid": "9bc90b182e3acd0fd0cfa10a7abc32f8",
"text": "The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interests from both academia and industry. In this paper we study user app engagement patterns and disruptions of those patterns in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted through major political, social, and sports events.",
"title": ""
},
{
"docid": "f5deca0eaba55d34af78df82eda27bc2",
"text": "Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience to a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR’s adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representation, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with non-language goal representation failed to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate it is crucial to use hindsight advice to solve challenging tasks, but we also found that little amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method.",
"title": ""
},
{
"docid": "8b675cc47b825268837a7a2b5a298dc9",
"text": "Artificial Intelligence chatbot is a technology that makes interaction between man and machine possible by using natural language. In this paper, we proposed an architectural design of a chatbot that will function as virtual diabetes physician/doctor. This chatbot will allow diabetic patients to have a diabetes control/management advice without the need to go to the hospital. A general history of a chatbot, a brief description of each chatbots is discussed. We proposed the design of a new technique that will be implemented in this chatbot as the key component to function as diabetes physician. Using this design, chatbot will remember the conversation path through parameter called Vpath. Vpath will allow chatbot to gives a response that is mostly suitable for the whole conversation as it specifically designed to be a virtual diabetes physician.",
"title": ""
},
{
"docid": "6a7f99969d377ea27de68394ac6843c9",
"text": "There is growing evidence that converting targets to soft targets in supervised learning can provide considerable gains in performance. Much of this work has considered classification, converting hard zero-one values to soft labels—such as by adding label noise, incorporating label ambiguity or using distillation. In parallel, there is some evidence from a regression setting in reinforcement learning that learning distributions can improve performance. In this work, we investigate the reasons for this improvement, in a regression setting. We introduce a novel distributional regression loss, and similarly find it significantly improves prediction accuracy. We investigate several common hypotheses, around reducing overfitting and improved representations. We instead find evidence for an alternative hypothesis: this loss is easier to optimize, with better behaved gradients, resulting in improved generalization. We provide theoretical support for this alternative hypothesis, by characterizing the norm of the gradients of this loss.",
"title": ""
},
{
"docid": "125513cbb52c4ef868988a3060070d95",
"text": "In this paper, we propose a new algorithm using spherical symmetric three dimensional local ternary patterns (SS-3D-LTP) for natural, texture and biomedical image retrieval applications. The existing local binary patterns (LBP), local ternary patterns (LTP), local derivative patterns (LDP), local tetra patterns (LTrP) etc., are encode the relationship between the center pixel and its surrounding neighbors in two dimensional (2D) local region of an image. The proposed method encodes the relationship between the center pixel and its surrounding neighbors with five selected directions in 3D plane which is generated from 2D image using multiresolution Gaussian filter bank. In addition, we propose the color SS-3D-LTP (CSS-3D-LTP) where we consider the RGB spaces as three planes of 3D volume. Three experiments have been carried out for proving the worth of our algorithm for natural, texture and biomedical image retrieval applications. It is further mentioned that the databases used for natural, texture and biomedical image retrieval applications are Corel-10K, Brodatz and open access series of imaging studies (OASIS) magnetic resonance databases respectively. The results after being investigated show a significant improvement in terms of their evaluation measures as compared to the start-of-art spatial as well as transform domain techniques on respective databases. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3a1bbaea6dae7f72a5276a32326884fe",
"text": "Statistics suggests that there are around 40 cases per million of quadriplegia every year. Great people like Stephen Hawking have been suffering from this phenomenon. Our project attempts to make lives of the people suffering from this phenomenon simple by helping them move around on their own and not being a burden on others. The idea is to create an Eye Controlled System which enables the movement of the patient’s wheelchair depending on the movements of eyeball. A person suffering from quadriplegia can move his eyes and partially tilt his head, thus giving is an opportunity for detecting these movements. There are various kinds of interfaces developed for powered wheelchair and also there are various new techniques invented but these are costly and not affordable to the poor and needy people. In this paper, we have proposed the simpler and cost effective method of developing wheelchair. We have created a system wherein a person sitting on this automated Wheel Chair with a camera mounted on it, is able to move in a direction just by looking in that direction by making eye movements. The captured camera signals are then send to PC and controlled MATLAB, which will then be send to the Arduino circuit over the Serial Interface which in turn will control motors and allow the wheelchair to move in a particular direction. The system is affordable and hence can be used by patients spread over a large economy range. KeywordsAutomatic wheelchair, Iris Movement Detection, Servo Motor, Daugman’s algorithm, Arduino.",
"title": ""
},
{
"docid": "5d673d1b6755e3e1d451ca17644cf3ec",
"text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm’s key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.",
"title": ""
},
{
"docid": "163c0be28804445bd99ad3e4a4e2c6dd",
"text": "We are witnessing a confluence between applied cryptography and secure hardware systems in enabling secure cloud computing. On one hand, work in applied cryptography has enabled efficient, oblivious data-structures and memory primitives. On the other, secure hardware and the emergence of Intel SGX has enabled a low-overhead and mass market mechanism for isolated execution. By themselves these technologies have their disadvantages. Oblivious memory primitives carry high performance overheads, especially when run non-interactively. Intel SGX, while more efficient, suffers from numerous softwarebased side-channel attacks, high context switching costs, and bounded memory size. In this work we build a new library of oblivious memory primitives, which we call ZeroTrace. ZeroTrace is designed to carefully combine state-of-the-art oblivious RAM techniques and SGX, while mitigating individual disadvantages of these technologies. To the best of our knowledge, ZeroTrace represents the first oblivious memory primitives running on a real secure hardware platform. ZeroTrace simultaneously enables a dramatic speed-up over pure cryptography and protection from softwarebased side-channel attacks. The core of our design is an efficient and flexible block-level memory controller that provides oblivious execution against any active software adversary, and across asynchronous SGX enclave terminations. Performance-wise, the memory controller can service requests for 4 B blocks in 1.2 ms and 1 KB blocks in 3.4 ms (given a 10 GB dataset). On top of our memory controller, we evaluate Set/Dictionary/List interfaces which can all perform basic operations (e.g., get/put/insert).",
"title": ""
},
{
"docid": "a427c3c0bcbfa10ce9ec1e7477697abe",
"text": "We present a system for real-time general object recognition (gor) for indoor robot in complex scenes. A point cloud image containing the object to be recognized from a Kinect sensor, for general object at will, must be extracted a point cloud model of the object with the Cluster Extraction method, and then we can compute the global features of the object model, making up the model database after processing many frame images. Here the global feature we used is Clustered Viewpoint Feature Histogram (CVFH) feature from Point Cloud Library (PCL). For real-time gor we must preprocess all the point cloud images streamed from the Kinect into clusters based on a clustering threshold and the min-max cluster sizes related to the size of the model, for reducing the amount of the clusters and improving the processing speed, and also compute the CVFH features of the clusters. For every cluster of a frame image, we search the several nearer features from the model database with the KNN method in the feature space, and we just consider the nearest model. If the strings of the model name contain the strings of the object to be recognized, it can be considered that we have recognized the general object; otherwise, we compute another cluster again and perform the above steps. The experiments showed that we had achieved the real-time recognition, and ensured the speed and accuracy for the gor.",
"title": ""
},
{
"docid": "c34e2227c97f71fbe3d2514e1e77e6e6",
"text": "A major difficulty in a recommendation system for groups is to use a group aggregation strategy to ensure, among other things, the maximization of the average satisfaction of group members. This paper presents an approach based on the theory of noncooperative games to solve this problem. While group members can be seen as game players, the items for potential recommendation for the group comprise the set of possible actions. Achieving group satisfaction as a whole becomes, then, a problem of finding the Nash equilibrium. Experiments with a MovieLens dataset and a function of arithmetic mean to compute the prediction of group satisfaction for the generated recommendation have shown statistically significant results when compared to state-of-the-art aggregation strategies, in particular, when evaluation among group members are more heterogeneous. The feasibility of this unique approach is shown by the development of an application for Facebook, which recommends movies to groups of friends.",
"title": ""
},
{
"docid": "9d3082cf18bad02c9e60f75985e0320a",
"text": "Defocus blur is extremely common in images captured using optical imaging systems. It may be undesirable, but may also be an intentional artistic effect, thus it can either enhance or inhibit our visual perception of the image scene. For tasks, such as image restoration and object recognition, one might want to segment a partially blurred image into blurred and non-blurred regions. In this paper, we propose a sharpness metric based on local binary patterns and a robust segmentation algorithm to separate in- and out-of-focus image regions. The proposed sharpness metric exploits the observation that most local image patches in blurry regions have significantly fewer of certain local binary patterns compared with those in sharp regions. Using this metric together with image matting and multi-scale inference, we obtained high-quality sharpness maps. Tests on hundreds of partially blurred images were used to evaluate our blur segmentation algorithm and six comparator methods. The results show that our algorithm achieves comparative segmentation results with the state of the art and have big speed advantage over the others.",
"title": ""
},
{
"docid": "92a14466bd675f10edb509765cbae18d",
"text": "Conditional Random Field (CRF) and recurrent neural models have achieved success in structured prediction. More re cently, there is a marriage of CRF and recurrent neural models, so that we can gain from both non-linear dense features and globally normalized CRF objective. These recurrent neu ral CRF models mainly focus on encode node features in CRF undirected graphs. However, edge features prove important to CRF in structured prediction. In this work, we introduce a new recurrent neural CRF model, which learns non-linear edge features, and thus makes non-linear featur es encoded completely. We compare our model with different neural models in well-known structured prediction tasks. E xperiments show that our model outperforms state-of-the-ar t methods in NP chunking, shallow parsing, Chinese word segmentation and POS tagging.",
"title": ""
},
{
"docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2",
"text": "The most commonly used method for cuffless blood pressure (BP) measurement is using pulse transit time (PTT), which is based on Moens-Korteweg (M-K) equation underlying the assumption that arterial geometries such as the arterial diameter keep unchanged. However, the arterial diameter is dynamic which varies over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP, shifted significantly from high frequency (HF) to low frequency (LF), but not obvious in that of PTT. These results demonstrated that PIR can be potentially used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. The PTT-based BP measurement that take into account the PIR could therefore improve its estimation accuracy.",
"title": ""
},
{
"docid": "666286c367730ca87e0bb7da60796f9e",
"text": "In this paper, we are presenting a new model for interactive music. Unlike most interactive systems, our model is based on file organization, but does not require digital audio treatments. This model includes a definition of a constraints system and its solver. The products of this project are intended for the general public, inexperienced users, as well as professional musicians, and will be distributed commercially. We are here presenting three products of this project. The difficulty of this project is to design a technology and software products for interactive music which must be easy to use by the general public and by professional composers.",
"title": ""
},
{
"docid": "3bc1d25f1d3027ef12bcdfce035524b7",
"text": "The empirical assessment of test techniques plays an important role in software testing research. One common practice is to seed faults in subject software, either manually or by using a program that generates all possible mutants based on a set of mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults, thus facilitating the statistical analysis of fault detection effectiveness of test suites; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. Focusing on four common control and data flow criteria (block, decision, C-use, and P-use), this paper investigates this important issue based on a middle size industrial program with a comprehensive pool of test cases and known faults. Based on the data available thus far, the results are very consistent across the investigated criteria as they show that the use of mutation operators is yielding trustworthy results: generated mutants can be used to predict the detection effectiveness of real faults. Applying such a mutation analysis, we then investigate the relative cost and effectiveness of the above-mentioned criteria by revisiting fundamental questions regarding the relationships between fault detection, test suite size, and control/data flow coverage. Although such questions have been partially investigated in previous studies, we can use a large number of mutants, which helps decrease the impact of random variation in our analysis and allows us to use a different analysis approach. Our results are then; compared with published studies, plausible reasons for the differences are provided, and the research leads us to suggest a way to tune the mutation analysis process to possible differences in fault detection probabilities in a specific environment",
"title": ""
},
{
"docid": "5c39dc517bb037cc4f1971d68d8a6416",
"text": "Computer Aided Design (CAD) is a multi-billion dollar industry used by almost every mechanical engineer in the world to create practically every existing manufactured shape. CAD models are not only widely available but also extremely useful in the growing field of fabrication-oriented design because they are parametric by construction and capture the engineer's design intent, including manufacturability. Harnessing this data, however, is challenging, because generating the geometry for a given parameter value requires time-consuming computations. Furthermore, the resulting meshes have different combinatorics, making the mesh data inherently discontinuous with respect to parameter adjustments. In our work, we address these challenges and develop tools that allow interactive exploration and optimization of parametric CAD data. To achieve interactive rates, we use precomputation on an adaptively sampled grid and propose a novel scheme for interpolating in this domain where each sample is a mesh with different combinatorics. Specifically, we extract partial correspondences from CAD representations for local mesh morphing and propose a novel interpolation method for adaptive grids that is both continuous/smooth and local (i.e., the influence of each sample is constrained to the local regions where mesh morphing can be computed). We show examples of how our method can be used to interactively visualize and optimize objects with a variety of physical properties.",
"title": ""
},
{
"docid": "97af9704b898bebe4dae43c1984bc478",
"text": "In earlier work we have shown that adults, young children, and infants are capable of computing transitional probabilities among adjacent syllables in rapidly presented streams of speech, and of using these statistics to group adjacent syllables into word-like units. In the present experiments we ask whether adult learners are also capable of such computations when the only available patterns occur in non-adjacent elements. In the first experiment, we present streams of speech in which precisely the same kinds of syllable regularities occur as in our previous studies, except that the patterned relations among syllables occur between non-adjacent syllables (with an intervening syllable that is unrelated). Under these circumstances we do not obtain our previous results: learners are quite poor at acquiring regular relations among non-adjacent syllables, even when the patterns are objectively quite simple. In subsequent experiments we show that learners are, in contrast, quite capable of acquiring patterned relations among non-adjacent segments-both non-adjacent consonants (with an intervening vocalic segment that is unrelated) and non-adjacent vowels (with an intervening consonantal segment that is unrelated). Finally, we discuss why human learners display these strong differences in learning differing types of non-adjacent regularities, and we conclude by suggesting that these contrasts in learnability may account for why human languages display non-adjacent regularities of one type much more widely than non-adjacent regularities of the other type.",
"title": ""
}
] |
scidocsrr
|
9183ceacd5b04beab7ade40885816e8d
|
Leveraging disjoint communities for detecting overlapping community structure
|
[
{
"docid": "f96bf84a4dfddc8300bb91227f78b3af",
"text": "Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and realworld networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.",
"title": ""
}
] |
[
{
"docid": "3e5d887ff00e4eff8e408e6d51d747b2",
"text": "We present a small object sensitive method for object detection. Our method is built based on SSD (Single Shot MultiBox Detector (Liu et al. 2016)), a simple but effective deep neural network for image object detection. The discrete nature of anchor mechanism used in SSD, however, may cause misdetection for the small objects located at gaps between the anchor boxes. SSD performs better for small object detection after circular shifts of the input image. Therefore, auxiliary feature maps are generated by conducting circular shifts over lower extra feature maps in SSD for small-object detection, which is equivalent to shifting the objects in order to fit the locations of anchor boxes. We call our proposed system Shifted SSD. Moreover, pinpoint accuracy of localization is of vital importance to small objects detection. Hence, two novel methods called Smooth NMS and IoU-Prediction module are proposed to obtain more precise locations. Then for video sequences, we generate trajectory hypothesis to obtain predicted locations in a new frame for further improved performance. Experiments conducted on PASCAL VOC 2007, along with MS COCO, KITTI and our small object video datasets, validate that both mAP and recall are improved with different degrees and the speed is almost the same as SSD.",
"title": ""
},
{
"docid": "4ac734960f264716721a0f0fa5305925",
"text": "Most of recent research on layered chalcogenides is understandably focused on single atomic layers. However, it is unclear if single-layer units are the most ideal structures for enhanced gas-solid interactions. To probe this issue further, we have prepared large-area MoS2 sheets ranging from single to multiple layers on 300 nm SiO2/Si substrates using the micromechanical exfoliation method. The thickness and layering of the sheets were identified by optical microscope, invoking recently reported specific optical color contrast, and further confirmed by AFM and Raman spectroscopy. The MoS2 transistors with different thicknesses were assessed for gas-sensing performances with exposure to NO2, NH3, and humidity in different conditions such as gate bias and light irradiation. The results show that, compared to the single-layer counterpart, transistors of few MoS2 layers exhibit excellent sensitivity, recovery, and ability to be manipulated by gate bias and green light. Further, our ab initio DFT calculations on single-layer and bilayer MoS2 show that the charge transfer is the reason for the decrease in resistance in the presence of applied field.",
"title": ""
},
{
"docid": "f45a291e721f77868c45d42b1b8827c7",
"text": "In this paper we present SAMSA, a new tool for the simulation of VHDL-AMS systems in Matlab. The goal is the definition of a VHDL framework in which analog/digital systems can be designed and simulated and new simulation techniques can be studied, exploiting both the powerful Matlab functions and Toolboxes.",
"title": ""
},
{
"docid": "41df967b371c9e649a551706c87025a0",
"text": "Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome—a quantum state indicating which error has occurred—by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.",
"title": ""
},
{
"docid": "1156e19011c722404e077ae64f6e526f",
"text": "Malwares are malignant softwares. It is designed t o amage computer systems without the knowledge of the owner using the system. Softwares from reputabl e vendors also contain malicious code that affects the system or leaks informations to remote servers. Malwares incl udes computer viruses, Worms, spyware, dishonest ad -ware, rootkits, Trojans, dialers etc. Malware is one of t he most serious security threats on the Internet to day. In fact, most Internet problems such as spam e-mails and denial o f service attacks have malwareas their underlying c ause. Computers that are compromised with malware are oft en networked together to form botnets and many atta cks re launched using these malicious, attacker controlled n tworks. The paper focuses on various Malware det ction and removal methods. KeywordsMalware, Intruders, Checksum, Digital Immune System , Behavior blocker",
"title": ""
},
{
"docid": "d3682d2a9e11f80a51c53659c9b6623d",
"text": "Despite the considerable clinical impact of congenital human cytomegalovirus (HCMV) infection, the mechanisms of maternal–fetal transmission and the resultant placental and fetal damage are largely unknown. Here, we discuss animal models for the evaluation of CMV vaccines and virus-induced pathology and particularly explore surrogate human models for HCMV transmission and pathogenesis in the maternal–fetal interface. Studies in floating and anchoring placental villi and more recently, ex vivo modeling of HCMV infection in integral human decidual tissues, provide unique insights into patterns of viral tropism, spread, and injury, defining the outcome of congenital infection, and the effect of potential antiviral interventions.",
"title": ""
},
{
"docid": "555ad116b9b285051084423e2807a0ba",
"text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors. '",
"title": ""
},
{
"docid": "59a32ec5b88436eca75d8fa9aa75951b",
"text": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types where images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be formulated as query types in a visual-relational KG. We approach the problem of answering such queries with a novel combination of deep convolutional networks and models for learning knowledge graph embeddings. The resulting models can answer queries such as “How are these two unseen images related to each other?\" We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The multi-relational grounding of unseen entity images into a knowledge graph serves as the description of such an entity. We conduct experiments to demonstrate that the proposed deep architectures in combination with KG embedding objectives can answer the visual-relational queries efficiently and accurately.",
"title": ""
},
{
"docid": "7c1c7eb4f011ace0734dd52759ce077f",
"text": "OBJECTIVES\nTo investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke.\n\n\nDESIGN\nA randomized controlled trial.\n\n\nSETTING\nOccupational therapy clinics in medical centers.\n\n\nSUBJECTS\nThirty-one subacute stroke patients were recruited.\n\n\nINTERVENTIONS\nParticipants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or to the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device.\n\n\nMAIN MEASURES\nMotor impairments were assessed by the Fugal-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale.\n\n\nRESULTS\nThe primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale ( p = 0.012) and a trend for greater improvement on the modified Rankin Scale ( p = 0.065) than the unprimed group.\n\n\nCONCLUSION\nBilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.",
"title": ""
},
{
"docid": "a671f82f767feebc84f0772d3f9569d6",
"text": "The long short-term memory (LSTM) neural network utilizes specialized modulation mechanisms to store information for extended periods of time. It is thus potentially well-suited for complex visual processing, where the current video frame must be considered in the context of past frames. Recent studies have indeed shown that LSTM can effectively recognize and classify human actions (e.g., running, hand waving) in video data; however, these results were achieved under somewhat restricted settings. In this effort, we seek to demonstrate that LSTM's performance remains robust even as experimental conditions deteriorate. Specifically, we show that classification accuracy exhibits graceful degradation when the LSTM network is faced with (a) lower quantities of available training data, (b) tighter deadlines for decision making (i.e., shorter available input data sequences) and (c) poorer video quality (resulting from noise, dropped frames or reduced resolution). We also clearly demonstrate the benefits of memory for video processing, particularly, under high noise or frame drop rates. Our study is thus an initial step towards demonstrating LSTM's potential for robust action recognition in real-world scenarios.",
"title": ""
},
{
"docid": "bda90d8f3b9cf98f714c1a4bfb7a9f61",
"text": "Learning image similarity metrics in an end-to-end fashion with deep networks has demonstrated excellent results on tasks such as clustering and retrieval. However, current methods, all focus on a very local view of the data. In this paper, we propose a new metric learning scheme, based on structured prediction, that is aware of the global structure of the embedding space, and which is designed to optimize a clustering quality metric (NMI). We show state of the art performance on standard datasets, such as CUB200-2011 [37], Cars196 [18], and Stanford online products [30] on NMI and R@K evaluation metrics.",
"title": ""
},
{
"docid": "0e5774cd78d71067a7dc07cbea5417e3",
"text": "Web applications are popular targets for cyber-attacks because they are network-accessible and often contain vulnerabilities. An intrusion detection system monitors web applications and issues alerts when an attack attempt is detected. Existing implementations of intrusion detection systems usually extract features from network packets or string characteristics of input that are manually selected as relevant to attack analysis. Manually selecting features, however, is time-consuming and requires in-depth security domain knowledge. Moreover, large amounts of labeled legitimate and attack request data are needed by supervised learning algorithms to classify normal and abnormal behaviors, which is often expensive and impractical to obtain for production web applications. This paper provides three contributions to the study of autonomic intrusion detection systems. First, we evaluate the feasibility of an unsupervised/semi-supervised approach for web attack detection based on the Robust Software Modeling Tool (RSMT), which autonomically monitors and characterizes the runtime behavior of web applications. Second, we describe how RSMT trains a stacked denoising autoencoder to encode and reconstruct the call graph for end-to-end deep learning, where a low-dimensional representation of the raw features with unlabeled request data is used to recognize anomalies by computing the reconstruction error of the request data. Third, we analyze the results of empirically testing RSMT on both synthetic datasets and production applications with intentional vulnerabilities. Our results show that the proposed approach can efficiently and accurately detect attacks, including SQL injection, cross-site scripting, and deserialization, with minimal domain knowledge and little labeled training data.",
"title": ""
},
{
"docid": "add6957a74f1df33e21bf1923732ddc4",
"text": "Conversational search and recommendation based on user-system dialogs exhibit major differences from conventional search and recommendation tasks in that 1) the user and system can interact for multiple semantically coherent rounds on a task through natural language dialog, and 2) it becomes possible for the system to understand the user needs or to help users clarify their needs by asking appropriate questions from the users directly. We believe the ability to ask questions so as to actively clarify the user needs is one of the most important advantages of conversational search and recommendation. In this paper, we propose and evaluate a unified conversational search/recommendation framework, in an attempt to make the research problem doable under a standard formalization. Specifically, we propose a System Ask -- User Respond (SAUR) paradigm for conversational search, define the major components of the paradigm, and design a unified implementation of the framework for product search and recommendation in e-commerce. To accomplish this, we propose the Multi-Memory Network (MMN) architecture, which can be trained based on large-scale collections of user reviews in e-commerce. The system is capable of asking aspect-based questions in the right order so as to understand the user needs, while (personalized) search is conducted during the conversation, and results are provided when the system feels confident. Experiments on real-world user purchasing data verified the advantages of conversational search and recommendation against conventional search and recommendation algorithms in terms of standard evaluation measures such as NDCG.",
"title": ""
},
{
"docid": "d565220c9e4b9a4b9f8156434b8b4cd3",
"text": "Decision Support Systems (DDS) have developed to exploit Information Technology (IT) to assist decision-makers in a wide variety of fields. The need to use spatial data in many of these diverse fields has led to increasing interest in the development of Spatial Decision Support Systems (SDSS) based around the Geographic Information System (GIS) technology. The paper examines the relationship between SDSS and GIS and suggests that SDSS is poised for further development owing to improvement in technology and the greater availability of spatial data.",
"title": ""
},
{
"docid": "6d89321d33ba5d923a7f31589888f430",
"text": "OBJECTIVE\nThe pain experienced by burn patients during physical therapy range of motion exercises can be extreme and can discourage patients from complying with their physical therapy. We explored the novel use of immersive virtual reality (VR) to distract patients from pain during physical therapy.\n\n\nSETTING\nThis study was conducted at the burn care unit of a regional trauma center.\n\n\nPATIENTS\nTwelve patients aged 19 to 47 years (average of 21% total body surface area burned) performed range of motion exercises of their injured extremity under an occupational therapist's direction.\n\n\nINTERVENTION\nEach patient spent 3 minutes of physical therapy with no distraction and 3 minutes of physical therapy in VR (condition order randomized and counter-balanced).\n\n\nOUTCOME MEASURES\nFive visual analogue scale pain scores for each treatment condition served as the dependent variables.\n\n\nRESULTS\nAll patients reported less pain when distracted with VR, and the magnitude of pain reduction by VR was statistically significant (e.g., time spent thinking about pain during physical therapy dropped from 60 to 14 mm on a 100-mm scale). The results of this study may be examined in more detail at www.hitL.washington.edu/projects/burn/.\n\n\nCONCLUSIONS\nResults provided preliminary evidence that VR can function as a strong nonpharmacologic pain reduction technique for adult burn patients during physical therapy and potentially for other painful procedures or pain populations.",
"title": ""
},
{
"docid": "ae18e923e22687f66303c7ff07689f38",
"text": "Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle differences in some specific parts. Most previous works rely on object / part level annotations to build part-based representation, which is demanding in practical applications. This paper proposes an automatic fine-grained recognition approach which is free of any object / part annotation at both training and testing stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher Vectors. We conditionally pick deep filter responses to encode them into the final representation, which considers the importance of filter responses themselves. Integrating all these techniques produces a much more powerful framework, and experiments conducted on CUB-200-2011 and Stanford Dogs demonstrate the superiority of our proposed algorithm over the existing methods.",
"title": ""
},
{
"docid": "785ca963ea1f9715cdea9baede4c6081",
"text": "In this paper, factor analysis is applied on a set of data that was collected to study the effectiveness of 58 different agile practices. The analysis extracted 15 factors, each was associated with a list of practices. These factors with the associated practices can be used as a guide for agile process improvement. Correlations between the extracted factors were calculated, and the significant correlation findings suggested that people who applied iterative and incremental development and quality assurance practices had a high success rate, that communication with the customer was not very popular as it had negative correlations with governance and iterative and incremental development. Also, people who applied governance practices also applied quality assurance practices. Interestingly success rate related negatively with traditional analysis methods such as Gantt chart and detailed requirements specification.",
"title": ""
},
{
"docid": "cee66cf1d7d44e4a21d0aeb2e6d0ff64",
"text": "Generating images of texture mapped geometry requires projecting surfaces onto a two-dimensional screen. If this projection involves perspective, then a division must be performed at each pixel of the projected surface in order to correctly calculate texture map coordinates. We show how a simple extension to perspective-comect texture mapping can be used to create various lighting effects, These include arbitrary projection of two-dimensional images onto geometry, realistic spotlights, and generation of shadows using shadow maps[ 10]. These effects are obtained in real time using hardware that performs correct texture mapping. CR",
"title": ""
},
{
"docid": "adc51e9fdbbb89c9a47b55bb8823c7fe",
"text": "State-of-the-art model counters are based on exhaustive DPLL algorithms, and have been successfully used in probabilistic reasoning, one of the key problems in AI. In this article, we present a new exhaustive DPLL algorithm with a formal semantics, a proof of correctness, and a modular design. The modular design is based on the separation of the core model counting algorithm from SAT solving techniques. We also show that the trace of our algorithm belongs to the language of Sentential Decision Diagrams (SDDs), which is a subset of Decision-DNNFs, the trace of existing state-of-the-art model counters. Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler.",
"title": ""
},
{
"docid": "12fc03e32fcbe17fb6b45cd89ea7c309",
"text": "The synthesis of lanthanide-activated phosphors is pertinent to many emerging applications, ranging from high-resolution luminescence imaging to next-generation volumetric full-color display. In particular, the optical processes governed by the 4f-5d transitions of divalent and trivalent lanthanides have been the key to enabling precisely tuned color emission. The fundamental importance of lanthanide-activated phosphors for the physical and biomedical sciences has led to rapid development of novel synthetic methodologies and relevant tools that allow for probing the dynamics of energy transfer processes. Here, we review recent progress in developing methods for preparing lanthanide-activated phosphors, especially those featuring 4f-5d optical transitions. Particular attention will be devoted to two widely studied dopants, Ce3+ and Eu2+. The nature of the 4f-5d transition is examined by combining phenomenological theories with quantum mechanical calculations. An emphasis is placed on the correlation of host crystal structures with the 5d-4f luminescence characteristics of lanthanides, including quantum yield, emission color, decay rate, and thermal quenching behavior. Several parameters, namely Debye temperature and dielectric constant of the host crystal, geometrical structure of coordination polyhedron around the luminescent center, and the accurate energies of 4f and 5d levels, as well as the position of 4f and 5d levels relative to the valence and conduction bands of the hosts, are addressed as basic criteria for high-throughput computational design of lanthanide-activated phosphors.",
"title": ""
}
] |
scidocsrr
|
4934c3477bac065a03bb4d5c1be29e0f
|
Beyond Training and Awareness: From Security Culture to Security Risk Management
|
[
{
"docid": "e59379bc46c4fcf85027a1624425949b",
"text": "Information Security Culture includes all socio-cultural measures that support technical security methods, so that information security becomes a natural aspect in the daily activity of every employee. To apply these socio-cultural measures in an effective and efficient way, certain management models and tools are needed. In our research we developed a framework analyzing the security culture of an organization which we then applied in a pre-evaluation survey. This paper is based on the results of this survey. We will develop a management model for creating, changing and maintaining Information Security Culture. This model will then be used to define explicit sociocultural measures, based on the concept of internal marketing.",
"title": ""
},
{
"docid": "60de343325a305b08dfa46336f2617b5",
"text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.",
"title": ""
}
] |
[
{
"docid": "7b737b18ecf21b9da10475ee407a428b",
"text": "This paper proposes a flexible and wearable hand exoskeleton which can be used as a computer mouse. The hand exoskeleton is developed based on a new concept of wearable mouse. The wearable mouse, which consists of flexible bend sensor, accelerometer and bluetooth, is designed for comfortable and supple usage. To demonstrate the effectiveness of the proposed wearable mouse, experiments are carried out for mouse operation consisting of click, cursor movement and wireless communication. The experimental results show that our wearable mouse is more accurate than a standard mouse.",
"title": ""
},
{
"docid": "2923d1776422a1f44395f169f0d61995",
"text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.",
"title": ""
},
{
"docid": "dfae6cf3df890c8cfba756384c4e88e6",
"text": "In this paper, we propose a second order optimization method to learn models where both the dimensionality of the parameter space and the number of training samples is high. In our method, we construct on each iteratio n a Krylov subspace formed by the gradient and an approximation to the Hess ian matrix, and then use a subset of the training data samples to optimize ove r this subspace. As with the Hessian Free (HF) method of [6], the Hessian matrix i s never explicitly constructed, and is computed using a subset of data. In p ractice, as in HF, we typically use a positive definite substitute for the Hessi an matrix such as the Gauss-Newton matrix. We investigate the effectiveness of o ur proposed method on learning the parameters of deep neural networks, and comp are its performance to widely used methods such as stochastic gradient descent, conjugate gradient descent and L-BFGS, and also to HF. Our method leads to faster convergence than either L-BFGS or HF, and generally performs better than either of them in cross-validation accuracy. It is also simpler and more gene ral than HF, as it does not require a positive semi-definite approximation of the He ssian matrix to work well nor the setting of a damping parameter. The chief drawba ck versus HF is the need for memory to store a basis for the Krylov subspace.",
"title": ""
},
{
"docid": "a0184870ca9830bbce30df1615e8bd0d",
"text": "Debate on the validity and reliability of scientific methods often arises in the courtroom. When the government (i.e., the prosecution) is the proponent of evidence, the defense is obliged to challenge its admissibility. Regardless, those who seek to use DNA typing methodologies to analyze forensic biological evidence have a responsibility to understand the technology and its applications so a proper foundation(s) for its use can be laid. Mitochondrial DNA (mtDNA), an extranuclear genome, has certain features that make it desirable for forensics, namely, high copy number, lack of recombination, and matrilineal inheritance. mtDNA typing has become routine in forensic biology and is used to analyze old bones, teeth, hair shafts, and other biological samples where nuclear DNA content is low. To evaluate results obtained by sequencing the two hypervariable regions of the control region of the human mtDNA genome, one must consider the genetically related issues of nomenclature, reference population databases, heteroplasmy, paternal leakage, recombination, and, of course, interpretation of results. We describe the approaches, the impact some issues may have on interpretation of mtDNA analyses, and some issues raised in the courtroom.",
"title": ""
},
{
"docid": "a4922f728f50fa06a63b826ed84c9f24",
"text": "Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this “reality gap”. By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.",
"title": ""
},
{
"docid": "9a5be4452928d80d6be8e8e0267dafa5",
"text": "degeneration of the basal layer in the epidermis. In the dermis, perivascular or lichenoid infiltrate and the presence of melanin incontinence were the predominant changes noted. A recently developed lesion tends to show more predominant band-like lymphocytic infiltration and epidermal vacuolization rather than epidermal atrophy. Linear lesions can frequently occur at sites of scratching or trauma in patients with LP as a result of Koebner’s phenomenon, or, as in our case, they may appear spontaneously within the lines of Blaschko on the face. In acquired Blaschko linear inflammatory dermatosis, cutaneous antigenic mosaicism could be responsible for the susceptibility to induce mosaic T-cell responses. Because drugs had not been changed in type or dosage over several years of treatment, and underlying medical diseases had been well controlled, the possibility of drug-related reaction was thought to be low. Considering the clinical features in our patient, and the fact that exposed sites were frequently the first to be involved, it can be suggested that exposure to sunlight (even in a casual dose) may be a kind of stimuli to induce the lesion of LPP in a genetically susceptible patient. Usually the course is chronic and treatments are less effective for follicular LP or LPP than for classical LP. Topical tacrolimus, a member of the immunosuppressive macrolide family that suppresses T-cell activation, has been shown to be effective in the treatment of some mucosal and follicular LP. There is only one article about the successful treatment of LPP with topical tacrolimus. Although they showed over 50% improvement in seven of 13 patients after 4 months of treatment, the authors did not mention any case of complete clearance in their article. Moreover, the other six of the 13 patients did not show improvement in pigmentation. Therefore, in the present case, 1064-nm QSNY with low fluence treatment was chosen for treating pigmentation. The 1064-nm QSNY in nanosecond (ns) domain is strongly absorbed by the finely distributed melanin in dermal pigmented lesions. Moreover, 1064-nm QSNY with low fluence, which in a ‘‘top-hat’’ beam mode can evenly distribute energy density throughout the whole spot, is now widely used when treating darker skin types, because it greatly reduces the risk of epidermal injury and post-therapy dyschromia. In our patient, because of poor response to topical steroid, we started tacrolimus ointment for mainly targeting T cells, and for the treatment of pigmentation, we added QSNY treatment. It suggests that the combination treatment of 1064-nm low fluenced QSNY with topical tacrolimus may be a good therapeutic option for patients with recalcitrant facial LPP in dark-skinned individuals.",
"title": ""
},
{
"docid": "ac8df493a25afe5801a4e29b4a71c28b",
"text": "We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning. The goal of this task is to find the permutation that recovers the structure of data from shuffled versions of it. In the case of natural images, this task boils down to recovering the original image from patches shuffled by an unknown permutation matrix. Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods. To this end, we resort to a continuous approximation of these matrices using doubly-stochastic matrices which we generate from standard CNN predictions using Sinkhorn iterations. Unrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet, an end-to-end CNN model for this task. The utility of DeepPermNet is demonstrated on two challenging computer vision problems, namely, (i) relative attributes learning and (ii) self-supervised representation learning. Our results show state-of-the-art performance on the Public Figures and OSR benchmarks for (i) and on the classification and segmentation tasks on the PASCAL VOC dataset for (ii).",
"title": ""
},
{
"docid": "01dbc861c46c26b22cf2322678eb9ab2",
"text": "To facilitate computer analysis of visual art, in the form of paintings, we introduce Pandora (Paintings Dataset for Recognizing the Art movement) database, a collection of digitized paintings labelled with respect to the artistic movement. Noting that the set of databases available as benchmarks for evaluation is highly reduced and most existing ones are limited in variability and number of images, we propose a novel large scale dataset of digital paintings. The database consists of more than 7700 images from 12 art movements. Each genre is illustrated by a number of images varying from 250 to nearly 1000. We investigate how local and global features and classification systems are able to recognize the art movement. Our experimental results suggest that accurate recognition is achievable by a combination of various categories.",
"title": ""
},
{
"docid": "5ffcc588301f0f577dfe9621b7420903",
"text": "Video summarization and video captioning are considered two separate tasks in existing studies. For longer videos, automatically identifying the important parts of video content and annotating them with captions will enable a richer and more concise condensation of the video. We propose a general neural network configuration that jointly considers two supervisory signals (i.e., an image-based video summary and text-based video captions) in the training phase and generates both a video summary and corresponding captions for a given video in the test phase. Our main idea is that the summary signals can help a video captioning model learn to focus on important frames. On the other hand, caption signals can help a video summarization model to learn better semantic representations. Jointly modeling both the video summarization and the video captioning tasks offers a novel end-to-end solution that generates a captioned video summary enabling users to index and navigate through the highlights in a video. Moreover, our experiments show the joint model can achieve better performance than state-of-the-art approaches in both individual tasks.",
"title": ""
},
{
"docid": "8d180d1b78fd64168c1808468bc8e032",
"text": "Even great efforts have been made for decades, the recognition of human activities is still an unmature technology that attracted plenty of people in computer vision. In this paper, a system framework is presented to recognize multiple kinds of activities from videos by an SVM multi-class classifier with a binary tree architecture. The framework is composed of three functionally cascaded modules: (a) detecting and locating people by non-parameter background subtraction approach, (b) extracting various of features such as local ones from the minimum bounding boxes of human blobs in each frames and a newly defined global one, contour coding of the motion energy image (CCMEI), and (c) recognizing activities of people by SVM multi-class classifier whose structure is determined by a clustering process. The thought of hierarchical classification is introduced and multiple SVMs are aggregated to accomplish the recognition of actions. Each SVM in the multi-class classifier is trained separately to achieve its best classification performance by choosing proper features before they are aggregated. Experimental results both on a homebrewed activity data set and the public Schüldt’s data set show the perfect identification performance and high robustness of the system. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "10b851c1d0113549764b80434c4bac5e",
"text": "In this paper, a simplified thermal model for variable speed self cooled induction motors is proposed and experimentally verified. The thermal model is based on simple equations that are compared with more complex equations well known in literature. The proposed thermal model allows to predict the over temperature in the main parts of the motor, starting from the measured or the estimated losses in the machine. In the paper the description of the thermal model set up is reported in detail. Finally, the model is used to define the correct power derating for a variable speed PWM induction motor drive.",
"title": ""
},
{
"docid": "7a2e4588826541a1b6d3a493d7601e0c",
"text": "Sports analytics in general, and football (soccer in USA) analytics in particular, have evolved in recent years in an amazing way, thanks to automated or semi-automated sensing technologies that provide high-fidelity data streams extracted from every game. In this paper we propose a data-driven approach and show that there is a large potential to boost the understanding of football team performance. From observational data of football games we extract a set of pass-based performance indicators and summarize them in the H indicator. We observe a strong correlation among the proposed indicator and the success of a team, and therefore perform a simulation on the four major European championships (78 teams, almost 1500 games). The outcome of each game in the championship was replaced by a synthetic outcome (win, loss or draw) based on the performance indicators computed for each team. We found that the final rankings in the simulated championships are very close to the actual rankings in the real championships, and show that teams with high ranking error show extreme values of a defense/attack efficiency measure, the Pezzali score. Our results are surprising given the simplicity of the proposed indicators, suggesting that a complex systems' view on football data has the potential of revealing hidden patterns and behavior of superior quality.",
"title": ""
},
{
"docid": "fe1e97ecf8d86f8610635834506942af",
"text": "The ad hoc network is a system of network elements that combine to form a network requiring little or no planning. This may not be feasible as nodes can enter and leave the network. In such networks, each node can receive the packet (host) and the packet sender (router) to act. The goal of routing is finding paths that meet the needs of the network and effectively use network resources. This paper presents a method for QoS routing in ad hoc networks based on ant colony optimization (ACO) algorithm and fuzzy logic. The advantages of this method flexibility and routing are based on several criteria. The results show that the proposed method in comparison with the algorithm IACA has better performance, higher efficiency and greater throughput. Therefore, the combination of ant algorithm with Fuzzy Logic due to its simplicity fuzzy computing is appropriate for QoS routing.",
"title": ""
},
{
"docid": "82f18b2c38969f556ff4464ecb99f837",
"text": "Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models— plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs)—can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models’ ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.",
"title": ""
},
{
"docid": "db7ed2c615bb93c6cec19b65f7b4366d",
"text": "Virtual anthropology consists of the introduction of modern slice imaging to biological and forensic anthropology. Thanks to this non-invasive scientific revolution, some classifications and staging systems, first based on dry bone analysis, can be applied to cadavers with no need for specific preparation, as well as to living persons. Estimation of bone and dental age is one of the possibilities offered by radiology. Biological age can be estimated in clinical forensic medicine as well as in living persons. Virtual anthropology may also help the forensic pathologist to estimate a deceased person’s age at death, which together with sex, geographical origin and stature, is one of the important features determining a biological profile used in reconstructive identification. For this forensic purpose, the radiological tools used are multislice computed tomography and, more recently, X-ray free imaging techniques such as magnetic resonance imaging and ultrasound investigations. We present and discuss the value of these investigations for age estimation in anthropology.",
"title": ""
},
{
"docid": "3caa44b574e8db885ad68169fe2446d8",
"text": "Dengue is a mosquito-borne fever in the southernmost part of India. It is caused by female mosquitoes, grown in stagnant water .The major symptoms for dengue are fever, bleeding, pain behind eyes, abdominal pain, fatigue, loss of appetite etc., Early diagnosis is the most important, in order to save the human from this deadly disease. Classification techniques helps to predict the disease at an early stage. In this research, Bayes belief network is classification technique is used to predict the probability for various disease occurrence using the probability distribution. Keywords— Prediction, Diagnosis, Bayes belief network, Probability distribution, Classification",
"title": ""
},
{
"docid": "a494d6d9c8919ade3590ed7f6cf44451",
"text": "Most algorithms commonly exploited for radar imaging are based on linear models that describe only direct scattering events from the targets in the investigated scene. This assumption is rarely verified in practical scenarios where the objects to be imaged interact with each other and with surrounding environment producing undesired multipath signals. These signals manifest in radar images as “ghosts\" that usually impair the reliable identification of the targets. The recent literature in the field is attempting to provide suitable techniques for multipath suppression from one side and from the other side is focusing on the exploitation of the additional information conveyed by multipath to improve target detection and localization. This work addresses the first problem with a specific focus on multipath ghosts caused by target-to-target interactions. In particular, the study is performed with regard to metallic scatterers by means of the linearized inverse scattering approach based on the physical optics (PO) approximation. A simple model is proposed in the case of point-like targets to gain insight into the ghosts problem so as to devise possible measurement and processing strategies for their mitigation. Finally, the effectiveness of these methods is assessed by reconstruction results obtained from full-wave synthetic data.",
"title": ""
},
{
"docid": "13055a3a35f058eb3622fb60afc436fc",
"text": "AIM\nTo investigate the probability of and factors influencing periapical status of teeth following primary (1°RCTx) or secondary (2°RCTx) root canal treatment.\n\n\nMETHODOLOGY\nThis prospective study involved annual clinical and radiographic follow-up of 1°RCTx (1170 roots, 702 teeth and 534 patients) or 2°RCTx (1314 roots, 750 teeth and 559 patients) carried out by Endodontic postgraduate students for 2-4 (50%) years. Pre-, intra- and postoperative data were collected prospectively on customized forms. The proportion of roots with complete periapical healing was estimated, and prognostic factors were investigated using multiple logistic regression models. Clustering effects within patients were adjusted in all models using robust standard error.\n\n\nRESULTS\nproportion of roots with complete periapical healing after 1°RCTx (83%; 95% CI: 81%, 85%) or 2°RCTx (80%; 95% CI: 78%, 82%) were similar. Eleven prognostic factors were identified. The conditions that were found to improve periapical healing significantly were: the preoperative absence of a periapical lesion (P = 0.003); in presence of a periapical lesion, the smaller its size (P ≤ 0.001), the better the treatment prognosis; the absence of a preoperative sinus tract (P = 0.001); achievement of patency at the canal terminus (P = 0.001); extension of canal cleaning as close as possible to its apical terminus (P = 0.001); the use of ethylene-diamine-tetra-acetic acid (EDTA) solution as a penultimate wash followed by final rinse with NaOCl solution in 2°RCTx cases (P = 0.002); abstaining from using 2% chlorexidine as an adjunct irrigant to NaOCl solution (P = 0.01); absence of tooth/root perforation (P = 0.06); absence of interappointment flare-up (pain or swelling) (P =0.002); absence of root-filling extrusion (P ≤ 0.001); and presence of a satisfactory coronal restoration (P ≤ 0.001).\n\n\nCONCLUSIONS\nSuccess based on periapical health associated with roots following 1°RCTx (83%) or 2°RCTx (80%) was similar, with 10 factors having a common effect on both, whilst the 11th factor 'EDTA as an additional irrigant' had different effects on the two treatments.",
"title": ""
},
{
"docid": "47ad04e8c93d39a500ab79a6d25d32f0",
"text": "OpenGV is a new C++ library for calibrated realtime 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level of completeness with regard to calibrated geometric vision algorithms, and it is the first library with a dedicated focus on a unified real-time usage of non-central multi-camera systems, which are increasingly popular in robotics and in the automotive industry. This paper introduces OpenGV's flexible interface and abstraction for multi-camera systems, and outlines the performance of all contained algorithms. It is our hope that the introduction of this open-source platform will motivate people to use it and potentially also include more algorithms, which would further contribute to the general accessibility of geometric vision algorithms, and build a common playground for the fair comparison of different solutions.",
"title": ""
}
] |
scidocsrr
|
1e17f3b96ca503494e7d37c3a74fefd5
|
A Revised Comparison of Crossover and Mutation in Genetic Programming
|
[
{
"docid": "54fd1a3a0fc7cfe04f3b7a83611854dc",
"text": "We review the main results obtained in the theory of schemata in genetic programming (GP), emphasizing their strengths and weaknesses. Then we propose a new, simpler definition of the concept of schema for GP, which is closer to the original concept of schema in genetic algorithms (GAs). Along with a new form of crossover, one-point crossover, and point mutation, this concept of schema has been used to derive an improved schema theorem for GP that describes the propagation of schemata from one generation to the next. We discuss this result and show that our schema theorem is the natural counterpart for GP of the schema theorem for GAs, to which it asymptotically converges.",
"title": ""
},
{
"docid": "8e1a65dd8bf9d8a4b67c46a0067ca42d",
"text": "Reading Genetic Programming IE Automatic Discovery ofReusable Programs (GPII) in its entirety is not a task for the weak-willed because the book without appendices is about 650 pages. An entire previous book by the same author [1] is devoted to describing Genetic Programming (GP), while this book is a sequel extolling an extension called Automatically Defined Functions (ADFs). The author, John R. Koza, argues that ADFs can be used in conjunction with GP to improve its efficacy on large problems. \"An automatically defined function (ADF) is a function (i.e., subroutine, procedure, module) that is dynamically evolved during a run of genetic programming and which may be called by a calling program (e.g., a main program) that is simultaneously being evolved\" (p. 1). Dr. Koza recommends adding the ADF technique to the \"GP toolkit.\" The book presents evidence that it is possible to interpret GP with ADFs as performing either a top-down process of problem decomposition or a bottom-up process of representational change to exploit identified regularities. This is stated as Main Point 1. Main Point 2 states that ADFs work by exploiting inherent regularities, symmetries, patterns, modularities, and homogeneities within a problem, though perhaps in ways that are very different from the style of programmers. Main Points 3 to 7 are appropriately qualified statements to the effect that, with a variety of problems, ADFs pay off be-",
"title": ""
},
{
"docid": "3a8be402f75af666076f441c124ac911",
"text": "This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of “building blocks” in GP.",
"title": ""
}
] |
[
{
"docid": "9bec22bcbf1ab3071d65dd8b41d3cf51",
"text": "Omni-directional mobile platforms have the ability to move instantaneously in any direction from any configuration. As such, it is important to have a mathematical model of the platform, especially if the platform is to be used as an autonomous vehicle. Autonomous behaviour requires that the mobile robot choose the optimum vehicle motion in different situations for object/collision avoidance and task achievement. This paper develops and verifies a mathematical model of a mobile robot platform that implements mecanum wheels to achieve omni-directionality. The mathematical model will be used to achieve optimum autonomous control of the developed mobile robot as an office service robot. Omni-directional mobile platforms have improved performance in congested environments and narrow aisles, such as those found in factory workshops, offices, warehouses, hospitals, etc.",
"title": ""
},
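The mecanum-platform passage above develops a kinematic model for omni-directional motion. A commonly used inverse-kinematics sketch is shown below; the wheel numbering, roller orientation and geometry values are illustrative assumptions and not necessarily the exact model derived in the paper.

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """Map a desired body twist (vx, vy, wz) to the four wheel speeds.

    r      : wheel radius [m]
    lx, ly : half wheelbase and half track width [m]
    Returns angular speeds [rad/s] for the front-left, front-right,
    rear-left and rear-right wheels in the usual X-roller configuration.
    """
    k = lx + ly
    jacobian = np.array([[1, -1, -k],
                         [1,  1,  k],
                         [1,  1, -k],
                         [1, -1,  k]]) / r
    return jacobian @ np.array([vx, vy, wz])

print(mecanum_wheel_speeds(0.3, 0.0, 0.0))  # pure forward motion
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))  # pure sideways motion
```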
{
"docid": "9a03c5ff214a1a41280e6f4b335c87f1",
"text": "In this paper, we present an automatic abstractive summarization system of meeting conversations. Our system extends a novel multi-sentence fusion algorithm in order to generate abstract templates. It also leverages the relationship between summaries and their source meeting transcripts to select the best templates for generating abstractive summaries of meetings. Our manual and automatic evaluation results demonstrate the success of our system in achieving higher scores both in readability and informativeness.",
"title": ""
},
{
"docid": "0252e39c527c3694da09dac7f136c403",
"text": "It is a generally accepted fact that Off-the-shelf OCR engines do not perform well in unconstrained scenarios like natural scene imagery, where text appears among the clutter of the scene. However, recent research demonstrates that a conventional shape-based OCR engine would be able to produce competitive results in the end-to-end scene text recognition task when provided with a conveniently preprocessed image. In this paper we confirm this finding with a set of experiments where two off-the-shelf OCR engines are combined with an open implementation of a state-of-the-art scene text detection framework. The obtained results demonstrate that in such pipeline, conventional OCR solutions still perform competitively compared to other solutions specifically designed for scene text recognition.",
"title": ""
},
{
"docid": "86f7166227e9223978ff779538a1cc79",
"text": "PURPOSE\nThis study was conducted to understand the degree of internet addiction tendency and to find out the factors influencing this addiction tendency among middle school students in Gyeong-buk area.\n\n\nMETHODS\nA total of 450 middle school students in the Daegu and Gyeong-buk area were surveyed in this study. Data collection was conducted through the use of questionnaires.\n\n\nRESULTS\nInternet addiction among middle school students was relatively low (Average user). In the overall ratio distribution, however, students who were classified as either addicted or at risk of addiction accounted for a high percentage, 27%. A positive correlation was found between Internet addiction and Internet expectation, depression and parent control over Internet use. A negative correlation was found between Internet addiction and interpersonal relationship, parent support and self-control. Multiple regression analysis revealed that the most powerful predictor of Internet addiction tendency was depression.\n\n\nCONCLUSION\nThrough the above results, it would be necessary to develop an Internet addiction prevention program for adolescents taking into account for the psychological factors such as depression and Internet use habits. In the future study, the need assessment will be useful for developing this prevention program.",
"title": ""
},
{
"docid": "3b5ef354f7ad216ca0bfcf893352bfce",
"text": "We offer the concept of multicommunicating to describe overlapping conversations, an increasingly common occurrence in the technology-enriched workplace. We define multicommunicating, distinguish it from other behaviors, and develop propositions for future research. Our work extends the literature on technology-stimulated restructuring and reveals one of the opportunities provided by lean media—specifically, an opportunity to multicommunicate. We conclude that the concept of multicommunicating has value both to the scholar and to the practicing manager.",
"title": ""
},
{
"docid": "50eaa44f8e89870750e279118a219d7a",
"text": "Fitbit fitness trackers record sensitive personal information, including daily step counts, heart rate profiles, and locations visited. By design, these devices gather and upload activity data to a cloud service, which provides aggregate statistics to mobile app users. The same principles govern numerous other Internet-of-Things (IoT) services that target different applications. As a market leader, Fitbit has developed perhaps the most secure wearables architecture that guards communication with end-to-end encryption. In this article, we analyze the complete Fitbit ecosystem and, despite the brand's continuous efforts to harden its products, we demonstrate a series of vulnerabilities with potentially severe implications to user privacy and device security. We employ a range of techniques, such as protocol analysis, software decompiling, and both static and dynamic embedded code analysis, to reverse engineer previously undocumented communication semantics, the official smartphone app, and the tracker firmware. Through this interplay and in-depth analysis, we reveal how attackers can exploit the Fitbit protocol to extract private information from victims without leaving a trace, and wirelessly flash malware without user consent. We demonstrate that users can tamper with both the app and firmware to selfishly manipulate records or circumvent Fitbit's walled garden business model, making the case for an independent, user-controlled, and more secure ecosystem. Finally, based on the insights gained, we make specific design recommendations that can not only mitigate the identified vulnerabilities, but are also broadly applicable to securing future wearable system architectures.",
"title": ""
},
{
"docid": "f82eb2d4cc45577f08c7e867bf012816",
"text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.",
"title": ""
},
{
"docid": "0997c292d6518b17991ce95839d9cc78",
"text": "A word's sentiment depends on the domain in which it is used. Computational social science research thus requires sentiment lexicons that are specific to the domains being studied. We combine domain-specific word embeddings with a label propagation framework to induce accurate domain-specific sentiment lexicons using small sets of seed words. We show that our approach achieves state-of-the-art performance on inducing sentiment lexicons from domain-specific corpora and that our purely corpus-based approach outperforms methods that rely on hand-curated resources (e.g., WordNet). Using our framework, we induce and release historical sentiment lexicons for 150 years of English and community-specific sentiment lexicons for 250 online communities from the social media forum Reddit. The historical lexicons we induce show that more than 5% of sentiment-bearing (non-neutral) English words completely switched polarity during the last 150 years, and the community-specific lexicons highlight how sentiment varies drastically between different communities.",
"title": ""
},
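The lexicon-induction passage above propagates the polarity of a few seed words through a graph built from domain-specific embeddings. The sketch below shows one plausible form of that propagation; the kNN graph construction, damping factor and seed handling are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def induce_lexicon(emb, vocab, pos_seeds, neg_seeds, k=10, beta=0.9, iters=50):
    """Propagate seed polarity over a kNN graph of word embeddings."""
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = X @ X.T                              # cosine similarity
    for i in range(len(vocab)):                # keep only k strongest neighbours
        sim[i, np.argsort(sim[i])[:-k]] = 0.0
    T = sim / sim.sum(axis=1, keepdims=True)   # row-stochastic transitions
    seed = np.zeros(len(vocab))
    seed[[vocab.index(w) for w in pos_seeds]] = 1.0
    seed[[vocab.index(w) for w in neg_seeds]] = -1.0
    s = seed.copy()
    for _ in range(iters):
        s = beta * (T @ s) + (1 - beta) * seed  # damped label propagation
    return dict(zip(vocab, s))                  # signed polarity scores
```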
{
"docid": "0b941153b9ade732ca52058698643a44",
"text": "In this paper, we prove the complexity bounds for methods of Convex Optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true both for nonsmooth and smooth problems. For the later class, we present also an accelerated scheme with the expected rate of convergence O(n/k), where k is the iteration counter. For Stochastic Optimization, we propose a zero-order scheme and justify its expected rate of convergence O(n/k). We give also some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, both for smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.",
"title": ""
},
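The passage above describes optimization schemes whose only oracle is the function value evaluated along random Gaussian directions. A minimal sketch of one such zero-order step follows; the step size, smoothing parameter and the toy quadratic are illustrative assumptions, not the paper's tuned constants.

```python
import numpy as np

def gradient_free_minimize(f, x0, n_iters=500, step=0.01, mu=1e-4, seed=0):
    """Minimise f using only function evaluations.

    Each iteration draws a Gaussian direction u and forms the estimate
    g = (f(x + mu*u) - f(x)) / mu * u, an unbiased gradient of a
    Gaussian-smoothed version of f, then takes a small step along -g.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        u = rng.normal(size=x.shape)
        g = (f(x + mu * u) - f(x)) / mu * u
        x = x - step * g
    return x

# Toy usage on a smooth convex quadratic with minimiser at the all-ones vector.
f = lambda z: float(np.sum((z - 1.0) ** 2))
print(gradient_free_minimize(f, np.zeros(5)))
```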
{
"docid": "c526e32c9c8b62877cb86bc5b097e2cf",
"text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.",
"title": ""
},
{
"docid": "71bc346237c5f97ac245dd7b7bbb497f",
"text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.",
"title": ""
},
{
"docid": "1b6a967402639dd6b3ca7138692fab54",
"text": "Web searchers often exhibit directed search behaviors such as navigating to a particular Website. However, in many circumstances they exhibit different behaviors that involve issuing many queries and visiting many results. In such cases, it is not clear whether the user's rationale is to intentionally explore the results or whether they are struggling to find the information they seek. Being able to disambiguate between these types of long search sessions is important for search engines both in performing retrospective analysis to understand search success, and in developing real-time support to assist searchers. The difficulty of this challenge is amplified since many of the characteristics of exploration (e.g., multiple queries, long duration) are also observed in sessions where people are struggling. In this paper, we analyze struggling and exploring behavior in Web search using log data from a commercial search engine. We first compare and contrast search behaviors along a number dimensions, including query dynamics during the session. We then build classifiers that can accurately distinguish between exploring and struggling sessions using behavioral and topical features. Finally, we show that by considering the struggling/exploring prediction we can more accurately predict search satisfaction.",
"title": ""
},
{
"docid": "381c02fb1ce523ddbdfe3acdde20abf1",
"text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.",
"title": ""
},
{
"docid": "ebb4c6a7f74ca3cede615542bcb0b11b",
"text": "The proposed system of the digitally emulated current mode control for a DC-DC boost converter using the FPGA is implemented by the emulation technique to generate PWM pulse. A reasonable A/D converter with a few MSPS conversion rate is good enough to control the DC-DC converter with 100 kHz switching frequency. It is found the experimental data show the good static and dynamic-response characteristics, which means that the proposed system can be integrated into one chip digital IC for power-source-control with reasonable price.",
"title": ""
},
{
"docid": "15e866c21b0739b7a2e24dc8ee5f1833",
"text": "Plastics have outgrown most man-made materials and have long been under environmental scrutiny. However, robust global information, particularly about their end-of-life fate, is lacking. By identifying and synthesizing dispersed data on production, use, and end-of-life management of polymer resins, synthetic fibers, and additives, we present the first global analysis of all mass-produced plastics ever manufactured. We estimate that 8300 million metric tons (Mt) as of virgin plastics have been produced to date. As of 2015, approximately 6300 Mt of plastic waste had been generated, around 9% of which had been recycled, 12% was incinerated, and 79% was accumulated in landfills or the natural environment. If current production and waste management trends continue, roughly 12,000 Mt of plastic waste will be in landfills or in the natural environment by 2050.",
"title": ""
},
{
"docid": "6b5950c88c8cb414a124e74e9bc2ed00",
"text": "As most regular readers of this TRANSACTIONS know, the development of digital signal processing techniques for applications involving image or picture data has been an increasingly active research area for the past decade. Collectively, t h s work is normally characterized under the generic heading “digital image processing.” Interestingly, the two books under review here share this heading as their title. Both are quite ambitious undertakings in that they attempt to integrate contributions from many disciplines (classical systems theory, digital signal processing, computer science, statistical communications, etc.) into unified, comprehensive presentations. In this regard it can be said that both are to some extent successful, although in quite different ways. Why the unusual step of a joint review? A brief overview of the two books reveals that they share not only a common title, but also similar objectives/purposes, intended audiences, structural organizations, and lists of topics considered. A more careful study reveals that substantial differences do exist, however, in the style and depth of subject treatment (as reflected in the difference in their lengths). Given their almost simultaneous publication, it seems appropriate to discuss these similarities/differences in a common setting. After much forethought (and two drafts), the reviewer decided to structure this review by describing the general topical material in their (joint) major sections, with supplementary comments directed toward the individual texts. It is hoped that this will provide the reader with a brief survey of the books’ contents and some flavor of their contrasting approaches. To avoid the identity problems of the joint title, each book will be subsequently referred to using the respective authors’ names: Gonzalez/Wintz and Pratt. Subjects will be correlated with chapter number(s) and approximate l ngth of coverage.",
"title": ""
},
{
"docid": "e0e62a76b1e2875f9aee585603da36ce",
"text": "Article history: Available online 4 August 2012",
"title": ""
},
{
"docid": "891ba8fbdf500605d4752f27d781ef7c",
"text": "In this paper, an evolutionary many-objective optimization algorithm based on corner solution search (MaOEACS) was proposed. MaOEA-CS implicitly contains two phases: the exploitative search for the most important boundary optimal solutions – corner solutions, at the first phase, and the use of angle-based selection [1] with the explorative search for the extension of PF approximation at the second phase. Due to its high efficiency and robustness to the shapes of PFs, it has won the CEC′2017 Competition on Evolutionary Many-Objective Optimization. In addition, MaOEA-CS has also been applied on two real-world engineering optimization problems with very irregular PFs. The experimental results show that MaOEACS outperforms other six state-of-the-art compared algorithms, which indicates it has the ability to handle real-world complex optimization problems with irregular PFs.",
"title": ""
},
{
"docid": "5e95aaa54f8acf073ccc11c08c148fe0",
"text": "Billions of dollars of loss are caused every year due to fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is however particularly challenging due to non stationary distribution of the data, highly imbalanced classes distributions and continuous streams of transactions. At the same time public data are scarcely available for confidentiality issues, leaving unanswered many questions about which is the best strategy to deal with them. In this paper we provide some answers from the practitioner’s perspective by focusing on three crucial issues: unbalancedness, non-stationarity and assessment. The analysis is made possible by a real credit card dataset provided by our industrial partner.",
"title": ""
},
{
"docid": "c95da5ee6fde5cf23b551375ff01e709",
"text": "The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9 % of outdoor and indoor devices, if the latter is experiencing 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95 % of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB- IoT provide extended support for the cellular Internet of Things, but with different trade- offs.",
"title": ""
}
] |
scidocsrr
|
7d4486def24011ceff09fdaa7607c00c
|
Design and Implementation of Digital dining in Restaurants using Android
|
[
{
"docid": "897efb599e554bf453a7b787c5874d48",
"text": "The Rampant growth of wireless technology and Mobile devices in this era is creating a great impact on our lives. Some early efforts have been made to combine and utilize both of these technologies in advancement of hospitality industry. This research work aims to automate the food ordering process in restaurant and also improve the dining experience of customers. In this paper we discuss about the design & implementation of automated food ordering system with real time customer feedback (AOS-RTF) for restaurants. This system, implements wireless data access to servers. The android application on user’s mobile will have all the menu details. The order details from customer’s mobile are wirelessly updated in central database and subsequently sent to kitchen and cashier respectively. The restaurant owner can manage the menu modifications easily. The wireless application on mobile devices provide a means of convenience, improving efficiency and accuracy for restaurants by saving time, reducing human errors and real-time customer feedback. This system successfully over comes the drawbacks in earlier PDA based food ordering system and is less expensive and more effective than the multi-touchable restaurant management systems.",
"title": ""
}
] |
[
{
"docid": "89e9d32e14da1acd74e23f8cecea5d8e",
"text": "BACKGROUND\nDespite considerable progress in the treatment of post-traumatic stress disorder (PTSD), a large percentage of individuals remain symptomatic following gold-standard therapies. One route to improving care is examining affective disturbances that involve other emotions beyond fear and threat. A growing body of research has implicated shame in PTSD's development and course, although to date no review of this specific literature exists. This scoping review investigated the link between shame and PTSD and sought to identify research gaps.\n\n\nMETHODS\nA systematic database search of PubMed, PsycInfo, Embase, Cochrane, and CINAHL was conducted to find original quantitative research related to shame and PTSD.\n\n\nRESULTS\nForty-seven studies met inclusion criteria. Review found substantial support for an association between shame and PTSD as well as preliminary evidence suggesting its utility as a treatment target. Several design limitations and under-investigated areas were recognized, including the need for a multimodal assessment of shame and more longitudinal and treatment-focused research.\n\n\nCONCLUSION\nThis review provides crucial synthesis of research to date, highlighting the prominence of shame in PTSD, and its likely relevance in successful treatment outcomes. The present review serves as a guide to future work into this critical area of study.",
"title": ""
},
{
"docid": "6eebe30d2e4f7ae4bc1ffb26287f8054",
"text": "Attention mechanisms in sequence to sequence models have shown great ability and wonderful performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we think that the lack of hierarchical mechanisms is a bottleneck in improving the performance of the attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on Chinese poem generation task and a nearly 6.5% averaged improvement compared with the existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms.",
"title": ""
},
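The Ham passage above combines the outputs of several stacked attention layers through a weighted sum. The sketch below illustrates only that combination; the plain scaled dot-product layer and the softmax-normalised level weights are illustrative stand-ins, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Plain scaled dot-product attention used as a stand-in layer."""
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def hierarchical_attention(q, k, v, depth=3, level_logits=None):
    """Stack `depth` attention layers and mix their outputs.

    The result is a convex combination of every level's output, so both
    low-level and high-level features contribute to the representation.
    """
    if level_logits is None:
        level_logits = np.zeros(depth)   # would be learned parameters in practice
    w = softmax(level_logits)
    outputs, x = [], q
    for _ in range(depth):
        x = attention(x, k, v)
        outputs.append(x)
    return sum(wi * oi for wi, oi in zip(w, outputs))

q, k, v = np.random.randn(4, 8), np.random.randn(6, 8), np.random.randn(6, 8)
print(hierarchical_attention(q, k, v).shape)  # (4, 8)
```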
{
"docid": "4cb41f9de259f18cd8fe52d2f04756a6",
"text": "The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch Postcode Lottery Each week, the Dutch Postcode Lottery (PCL) randomly selects a postal code, and distributes cash and a new BMW to lottery participants in that code. We study the effects of these shocks on lottery winners and their neighbors. Consistent with the life-cycle hypothesis, the effects on winners’ consumption are largely confined to cars and other durables. Consistent with the theory of in-kind transfers, the vast majority of BMW winners liquidate their BMWs. We do, however, detect substantial social effects of lottery winnings: PCL nonparticipants who live next door to winners have significantly higher levels of car consumption than other nonparticipants. JEL Classification: D12, C21",
"title": ""
},
{
"docid": "351ef0cd284fd0f1af8b92dbd51a6e1a",
"text": "The continuously increasing efficiency and power density requirement of the AC-DC front-end converter posed a big challenge for today's power factor correction (PFC) circuit design. The multi-channel interleaved PFC is a promising candidate to achieve the goals. In this paper, the multi-channel interleaving impact on the EMI filter design and the output capacitor life time is investigated. By properly choosing the interleaving channel number and the switching frequency, the EMI filter size and cost can be effectively reduced. Further more; multi-channel PFC with asymmetrical interleaving strategy is introduced, and the additional benefit on the EMI filter is identified. At the output side, different interleaving schemes impact on the output capacitor ripple cancellation effect is also investigated and compared.",
"title": ""
},
{
"docid": "026408a6ad888ea0bcf298a23ef77177",
"text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.",
"title": ""
},
{
"docid": "4259a2252b1065a011655d9f25498b10",
"text": "In this paper we shall prove two results. The first one is of interest in number theory and automorphic forms, while the second is a result in harmonic analysis on p-adic reductive groups. The two results, even though seemingly different, are fairly related by a conjecture of Langlands [13]. To explain the first result let F be a number field and denote by AF its ring of adeles. Given a place v of F, we let Fv denote its completion at v. Let 03C0 be a cusp form on GL2(AF). Write n = Q9v1tv. · For an unramified v, let diag(cxv, 03B2v) denote the diagonal element in GL2(C), the L-group of GL2, attached to 1tv. For a fixed positive integer m, let rm denote the m-th symmetric power representation of the standard representation r, of GL2(C) which is an irreducible (m + 1 )-dimensional representation. Then, for a complex number s, the local Langlands L-function [14] attached to 1tv and rm is",
"title": ""
},
{
"docid": "3e2df9d6ed3cad12fcfda19d62a0b42e",
"text": "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.",
"title": ""
},
{
"docid": "d46916f82e8f6ac8f4f3cb3df1c6875f",
"text": "Mobile devices are becoming the prevalent computing platform for most people. TouchDevelop is a new mobile development environment that enables anyone with a Windows Phone to create new apps directly on the smartphone, without a PC or a traditional keyboard. At the core is a new mobile programming language and editor that was designed with the touchscreen as the only input device in mind. Programs written in TouchDevelop can leverage all phone sensors such as GPS, cameras, accelerometer, gyroscope, and stored personal data such as contacts, songs, pictures. Thousands of programs have already been written and published with TouchDevelop.",
"title": ""
},
{
"docid": "c0ef15616ba357cb522b828e03a5298c",
"text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.",
"title": ""
},
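The compact GA passage above replaces the population with a probability vector that is nudged toward the winner of pairwise tournaments. A minimal bit-string sketch follows; the OneMax objective and the virtual population size are illustrative assumptions.

```python
import numpy as np

def compact_ga(fitness, n_bits, pop_size=50, max_evals=20000, seed=0):
    """Compact genetic algorithm over bit strings.

    A probability vector p stands in for the population: two candidates are
    sampled per step, and wherever they disagree p is shifted by 1/pop_size
    toward the winning candidate's bit value.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    for _ in range(max_evals // 2):
        a = (rng.random(n_bits) < p).astype(int)
        b = (rng.random(n_bits) < p).astype(int)
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        differ = winner != loser
        p[differ] += (2 * winner[differ] - 1) / pop_size   # +/- 1/pop_size per gene
        p = np.clip(p, 0.0, 1.0)
        if np.all((p < 1e-9) | (p > 1 - 1e-9)):            # fully converged
            break
    return (p > 0.5).astype(int)

# Toy usage: maximise the number of ones (OneMax).
print(compact_ga(lambda bits: bits.sum(), n_bits=20))
```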
{
"docid": "c99389ad72e35abb651f9002f6053ab3",
"text": "Person re-identification aims to match the images of pedestrians across different camera views from different locations. This is a challenging intelligent video surveillance problem that remains an active area of research due to the need for performance improvement. Person re-identification involves two main steps: feature representation and metric learning. Although the keep it simple and straightforward (KISS) metric learning method for discriminative distance metric learning has been shown to be effective for the person re-identification, the estimation of the inverse of a covariance matrix is unstable and indeed may not exist when the training set is small, resulting in poor performance. Here, we present dual-regularized KISS (DR-KISS) metric learning. By regularizing the two covariance matrices, DR-KISS improves on KISS by reducing overestimation of large eigenvalues of the two estimated covariance matrices and, in doing so, guarantees that the covariance matrix is irreversible. Furthermore, we provide theoretical analyses for supporting the motivations. Specifically, we first prove why the regularization is necessary. Then, we prove that the proposed method is robust for generalization. We conduct extensive experiments on three challenging person re-identification datasets, VIPeR, GRID, and CUHK 01, and show that DR-KISS achieves new state-of-the-art performance.",
"title": ""
},
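The DR-KISS passage above regularizes the two covariance matrices of the KISS metric so that their inverses exist even with few training pairs. The sketch below uses a simple shrinkage regularizer as a placeholder; the specific regularizer and its strength are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def regularize(cov, alpha=0.1):
    """Shrink a covariance estimate toward a scaled identity so that it is
    invertible and well conditioned even for small training sets."""
    d = cov.shape[0]
    return (1 - alpha) * cov + alpha * (np.trace(cov) / d) * np.eye(d)

def dr_kiss_metric(diff_similar, diff_dissimilar, alpha=0.1):
    """Return the Mahalanobis-like matrix M used for re-identification.

    diff_similar / diff_dissimilar: arrays of shape (n, d) holding feature
    differences of pairs showing the same / different persons.  Distances
    d(xi, xj) = (xi - xj)^T M (xi - xj) are then ranked; smaller values
    indicate a likely match.
    """
    cov_s = np.cov(diff_similar, rowvar=False)
    cov_d = np.cov(diff_dissimilar, rowvar=False)
    return np.linalg.inv(regularize(cov_s, alpha)) - np.linalg.inv(regularize(cov_d, alpha))
```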
{
"docid": "67808f54305bc2bb2b3dd666f8b4ef42",
"text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.",
"title": ""
},
{
"docid": "455b2a46ef0a6a032686eaaedf9cacf3",
"text": "Recently, taxonomy has attracted much attention. Both automatic construction solutions and human-based computation approaches have been proposed. The automatic methods suffer from the problem of either low precision or low recall and human computation, on the other hand, is not suitable for large scale tasks. Motivated by the shortcomings of both approaches, we present a hybrid framework, which combines the power of machine-based approaches and human computation (the crowd) to construct a more complete and accurate taxonomy. Specifically, our framework consists of two steps: we first construct a complete but noisy taxonomy automatically, then crowd is introduced to adjust the entity positions in the constructed taxonomy. However, the adjustment is challenging as the budget (money) for asking the crowd is often limited. In our work, we formulate the problem of finding the optimal adjustment as an entity selection optimization (ESO) problem, which is proved to be NP-hard. We then propose an exact algorithm and a more efficient approximation algorithm with an approximation ratio of 1/2(1-1/e). We conduct extensive experiments on real datasets, the results show that our hybrid approach largely improves the recall of the taxonomy with little impairment for precision.",
"title": ""
},
{
"docid": "cdb295a5a98da527a244d9b9f490407e",
"text": "The Toggle-based <italic>X</italic>-masking method requires a single toggle at a given cycle, there is a chance that non-<italic>X</italic> values are also masked. Hence, the non-<italic>X</italic> value over-masking problem may cause a fault coverage degradation. In this paper, a scan chain partitioning scheme is described to alleviate non-<italic>X </italic> bit over-masking problem arising from Toggle-based <italic>X</italic>-Masking method. The scan chain partitioning method finds a scan chain combination that gives the least toggling conflicts. The experimental results show that the amount of over-masked bits is significantly reduced, and it is further reduced when the proposed method is incorporated with <italic>X</italic>-canceling method. However, as the number of scan chain partitions increases, the control data for decoder increases. To reduce a control data overhead, this paper exploits a Huffman coding based data compression. Assuming two partitions, the size of control bits is even smaller than the conventional <italic>X </italic>-toggling method that uses only one decoder. In addition, selection rules of <italic>X</italic>-bits delivered to <italic>X</italic>-Canceling MISR are also proposed. With the selection rules, a significant test time increase can be prevented.",
"title": ""
},
{
"docid": "9dd3157c4c94c62e2577ace7f6c41629",
"text": "BACKGROUND\nThere is a growing concern over the addictiveness of Social Media use. Additional representative indicators of impaired control are needed in order to distinguish presumed social media addiction from normal use.\n\n\nAIMS\n(1) To examine the existence of time distortion during non-social media use tasks that involve social media cues among those who may be considered at-risk for social media addiction. (2) To examine the usefulness of this distortion for at-risk vs. low/no-risk classification.\n\n\nMETHOD\nWe used a task that prevented Facebook use and invoked Facebook reflections (survey on self-control strategies) and subsequently measured estimated vs. actual task completion time. We captured the level of addiction using the Bergen Facebook Addiction Scale in the survey, and we used a common cutoff criterion to classify people as at-risk vs. low/no-risk of Facebook addiction.\n\n\nRESULTS\nThe at-risk group presented significant upward time estimate bias and the low/no-risk group presented significant downward time estimate bias. The bias was positively correlated with Facebook addiction scores. It was efficacious, especially when combined with self-reported estimates of extent of Facebook use, in classifying people to the two categories.\n\n\nCONCLUSIONS\nOur study points to a novel, easy to obtain, and useful marker of at-risk for social media addiction, which may be considered for inclusion in diagnosis tools and procedures.",
"title": ""
},
{
"docid": "187fe997bb78bf60c5aaf935719df867",
"text": "Access to clean, affordable and reliable energy has been a cornerstone of the world's increasing prosperity and economic growth since the beginning of the industrial revolution. Our use of energy in the twenty–first century must also be sustainable. Solar and water–based energy generation, and engineering of microbes to produce biofuels are a few examples of the alternatives. This Perspective puts these opportunities into a larger context by relating them to a number of aspects in the transportation and electricity generation sectors. It also provides a snapshot of the current energy landscape and discusses several research and development opportunities and pathways that could lead to a prosperous, sustainable and secure energy future for the world.",
"title": ""
},
{
"docid": "28370dc894584f053a5bb029142ad587",
"text": "Pharmaceutical parallel trade in the European Union is a large and growing phenomenon, and hope has been expressed that it has the potential to reduce prices paid by health insurance and consumers and substantially to raise overall welfare. In this paper we examine the phenomenon empirically, using data on prices and volumes of individual imported products. We have found that the gains from parallel trade accrue mostly to the distribution chain rather than to health insurance and consumers. This is because in destination countries parallel traded drugs are priced just below originally sourced drugs. We also test to see whether parallel trade has a competition impact on prices in destination countries and find that it does not. Such competition effects as there are in pharmaceuticals come mainly from the presence of generics. Accordingly, instead of a convergence to the bottom in EU pharmaceutical prices, the evidence points at ‘convergence to the top’. This is explained by the fact that drug prices are subjected to regulation in individual countries, and by the limited incentives of purchasers to respond to price differentials.",
"title": ""
},
{
"docid": "c01bb81c729f900ee468dae62738ab09",
"text": "The success of convolutional networks in learning problems involving planar signals such as images is due to their ability to exploit the translation symmetry of the data distribution through weight sharing. Many areas of science and egineering deal with signals with other symmetries, such as rotation invariant data on the sphere. Examples include climate and weather science, astrophysics, and chemistry. In this paper we present spherical convolutional networks. These networks use convolutions on the sphere and rotation group, which results in rotational weight sharing and rotation equivariance. Using a synthetic spherical MNIST dataset, we show that spherical convolutional networks are very effective at dealing with rotationally invariant classification problems.",
"title": ""
},
{
"docid": "ccb5a426e9636186d2819f34b5f0d5e8",
"text": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).",
"title": ""
},
{
"docid": "b19fb7f7471d3565e79dbaab3572bb4d",
"text": "Self-enucleation or oedipism is a specific manifestation of psychiatric illness distinct from the milder forms of self-inflicted ocular injury. In this article, we discuss the previously unreported medical complication of subarachnoid hemorrhage accompanying self-enucleation. The diagnosis was suspected from the patient's history and was confirmed by computed tomographic scan of the head. This complication may be easily missed in the overtly psychotic patient. Specific steps in the medical management of self-enucleation are discussed, and medical complications of self-enucleation are reviewed.",
"title": ""
},
{
"docid": "4446ec55b23ae88192764cffd519afd3",
"text": "We present Inferential Power Analysis (IPA), a new class of attacks based on power analysis. An IPA attack has two stages: a profiling stage and a key extraction stage. In the profiling stage, intratrace differencing, averaging, and other statistical operations are performed on a large number of power traces to learn details of the implementation, leading to the location and identification of key bits. In the key extraction stage, the key is obtained from a very few power traces; we have successfully extracted keys from a single trace. Compared to differential power analysis, IPA has the advantages that the attacker does not need either plaintext or ciphertext, and that, in the key extraction stage, a key can be obtained from a small number of traces.",
"title": ""
}
] |
scidocsrr
|
ef5780662180c36ae5cc1548791e5884
|
Digital Control in Power Electronics
|
[
{
"docid": "1648a759d2487177af4b5d62407fd6cd",
"text": "This paper discusses the presence of steady-state limit cycles in digitally controlled pulse-width modulation (PWM) converters, and suggests conditions on the control law and the quantization resolution for their elimination. It then introduces single-phase and multi-phase controlled digital dither as a means of increasing the effective resolution of digital PWM (DPWM) modules, allowing for the use of low resolution DPWM units in high regulation accuracy applications. Bounds on the number of bits of dither that can be used in a particular converter are derived.",
"title": ""
}
] |
[
{
"docid": "77b84c86b80d3e1c54b2ce4458a0cc52",
"text": "We summarize three evaluations of an educational augmented reality application for geometry education, which have been conducted in 2000, 2003 and 2005 respectively. Repeated formative evaluations with more than 100 students guided the redesign of the application and its user interface throughout the years. We present and discuss the results regarding usability and simulator sickness providing guidelines on how to design augmented reality applications utilizing head-mounted displays.",
"title": ""
},
{
"docid": "84b2dbea13df9e6ee70570a05f82049f",
"text": "The main aim of this position paper is to identify and briefly discuss design-related issues commonly encountered with the implementation of both behaviour change techniques and persuasive design principles in physical activity smartphone applications. These overlapping issues highlight a disconnect in the perspectives held between health scientists' focus on the application of behaviour change theories and components of interventions, and the information systems designers' focus on the application of persuasive design principles as software design features intended to motivate, facilitate and support individuals through the behaviour change process. A review of the current status and some examples of these different perspectives is presented, leading to the identification of the main issues associated with this disconnection. The main behaviour change technique issues identified are concerned with: the fragmented integration of techniques, hindrances in successful use, diversity of user needs and preferences, and the informational flow and presentation. The main persuasive design issues identified are associated with: the fragmented application of persuasive design principles, hindrances in successful usage, diversity of user needs and preferences, informational flow and presentation, the lack of pragmatic guidance for application designers, and the maintenance of immersive user interactions and engagements. Given the common overlap across four of the identified issues, it is concluded that a methodological approach for integrating these two perspectives, and their associated issues, into a consolidated framework is necessary to address the apparent disconnect between these two independently-established, yet complementary fields.",
"title": ""
},
{
"docid": "cc223877e0ac3ca45d9f57e9d83e32d8",
"text": "The type of the workload on a database management system (DBMS) is a key consideration in tuning the system. Allocations for resources such as main memory can be very different depending on whether the workload type is Online Transaction Processing (OLTP) or Decision Support System (DSS). In this paper, we present an approach to automatically identifying a DBMS workload as either OLTP or DSS. We build a classification model based on the most significant workload characteristics that differentiate OLTP from DSS, and then use the model to identify any change in the workload type. We construct a workload classifier from the Browsing and Ordering profiles of the TPC-W benchmark. Experiments with an industry-supplied workload show that our classifier accurately identifies the mix of OLTP and DSS work within an application workload.",
"title": ""
},
{
"docid": "97075bfa0524ad6251cefb2337814f32",
"text": "Reverberation distorts human speech and usually has negative effects on speech intelligibility, especially for hearing-impaired listeners. It also causes performance degradation in automatic speech recognition and speaker identification systems. Therefore, the dereverberation problem must be dealt with in daily listening environments. We propose to use deep neural networks (DNNs) to learn a spectral mapping from the reverberant speech to the anechoic speech. The trained DNN produces the estimated spectral representation of the corresponding anechoic speech. We demonstrate that distortion caused by reverberation is substantially attenuated by the DNN whose outputs can be resynthesized to the dereverebrated speech signal. The proposed approach is simple, and our systematic evaluation shows promising dereverberation results, which are significantly better than those of related systems.",
"title": ""
},
{
"docid": "39bd9645fbe5bb4f7dcd486274710347",
"text": "This paper presents the design of a first-order continuous-time sigma-delta modulator. It can accept input signal bandwidth of 10 kHz with oversampling ratio of 250. The modulator operates at 1.8 V supply voltage and uses 0.18 mum CMOS technology. It achieves a level of 60 dB SNR",
"title": ""
},
{
"docid": "0d0c44dd4fd5b89edc29763ad038540b",
"text": "There is at present limited understanding of the neurobiological basis of the different processes underlying emotion perception. We have aimed to identify potential neural correlates of three processes suggested by appraisalist theories as important for emotion perception: 1) the identification of the emotional significance of a stimulus; 2) the production of an affective state in response to 1; and 3) the regulation of the affective state. In a critical review, we have examined findings from recent animal, human lesion, and functional neuroimaging studies. Findings from these studies indicate that these processes may be dependent upon the functioning of two neural systems: a ventral system, including the amygdala, insula, ventral striatum, and ventral regions of the anterior cingulate gyrus and prefrontal cortex, predominantly important for processes 1 and 2 and automatic regulation of emotional responses; and a dorsal system, including the hippocampus and dorsal regions of anterior cingulate gyrus and prefrontal cortex, predominantly important for process 3. We suggest that the extent to which a stimulus is identified as emotive and is associated with the production of an affective state may be dependent upon levels of activity within these two neural systems.",
"title": ""
},
{
"docid": "a75fabc25204f8bd8030585f1062219a",
"text": "Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth-product (SBP), by taking a series of low resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800 pixel phase image using only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼ 6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image domain loss and a weighted Fourier domain loss, which leads to improved reconstruction of the high frequency information. Additionally, we also exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.",
"title": ""
},
{
"docid": "bd9878ef264e27321b3e0fe6fe3f25cc",
"text": "There is a wide gap between symbolic reasoning and deep learning. In this research, we explore the possibility of using deep learning to improve symbolic reasoning. Briefly, in a reasoning system, a deep feedforward neural network is used to guide rewriting processes after learning from algebraic reasoning examples produced by humans. To enable the neural network to recognise patterns of algebraic expressions with non-deterministic sizes, reduced partial trees are used to represent the expressions. Also, to represent both top-down and bottom-up information of the expressions, a centralisation technique is used to improve the reduced partial trees. Besides, symbolic association vectors and rule application records are used to improve the rewriting processes. Experimental results reveal that the algebraic reasoning examples can be accurately learnt only if the feedforward neural network has enough hidden layers. Also, the centralisation technique, the symbolic association vectors and the rule application records can reduce error rates of reasoning. In particular, the above approaches have led to 4.6% error rate of reasoning on a dataset of linear equations, differentials and integrals.",
"title": ""
},
{
"docid": "af4106bc4051e01146101aeb58a4261f",
"text": "In recent years a great amount of research has focused on algorithms that learn features from unlabeled data. In this work we propose a model based on the Self-Organizing Map (SOM) neural network to learn features useful for the problem of automatic natural images classification. In particular we use the SOM model to learn single-layer features from the extremely challenging CIFAR-10 dataset, containing 60.000 tiny labeled natural images, and subsequently use these features with a pyramidal histogram encoding to train a linear SVM classifier. Despite the large number of images, the proposed feature learning method requires only few minutes on an entry-level system, however we show that a supervised classifier trained with learned features provides significantly better results than using raw pixels values or other handcrafted features designed specifically for image classification. Moreover, exploiting the topological property of the SOM neural network, it is possible to reduce the number of features and speed up the supervised training process combining topologically close neurons, without repeating the feature learning process.",
"title": ""
},
{
"docid": "2ebf4b32598ba3cd74513f1bab8fe447",
"text": "Anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis is an autoimmune disorder of the central nervous system (CNS). Its immunopathogenesis has been proposed to include early cerebrospinal fluid (CSF) lymphocytosis, subsequent CNS disease restriction and B cell mechanism predominance. There are limited data regarding T cell involvement in the disease. To contribute to the current knowledge, we investigated the complex system of chemokines and cytokines related to B and T cell functions in CSF and sera samples from anti-NMDAR encephalitis patients at different time-points of the disease. One patient in our study group had a long-persisting coma and underwent extraordinary immunosuppressive therapy. Twenty-seven paired CSF/serum samples were collected from nine patients during the follow-up period (median 12 months, range 1–26 months). The patient samples were stratified into three periods after the onset of the first disease symptom and compared with the controls. Modified Rankin score (mRS) defined the clinical status. The concentrations of the chemokines (C-X-C motif ligand (CXCL)10, CXCL8 and C-C motif ligand 2 (CCL2)) and the cytokines (interferon (IFN)γ, interleukin (IL)4, IL7, IL15, IL17A and tumour necrosis factor (TNF)α) were measured with Luminex multiple bead technology. The B cell-activating factor (BAFF) and CXCL13 concentrations were determined via enzyme-linked immunosorbent assay. We correlated the disease period with the mRS, pleocytosis and the levels of all of the investigated chemokines and cytokines. Non-parametric tests were used, a P value <0.05 was considered to be significant. The increased CXCL10 and CXCL13 CSF levels accompanied early-stage disease progression and pleocytosis. The CSF CXCL10 and CXCL13 levels were the highest in the most complicated patient. The CSF BAFF levels remained unchanged through the periods. In contrast, the CSF levels of T cell-related cytokines (INFγ, TNFα and IL17A) and IL15 were slightly increased at all of the periods examined. No dynamic changes in chemokine and cytokine levels were observed in the peripheral blood. Our data support the hypothesis that anti-NMDAR encephalitis is restricted to the CNS and that chemoattraction of immune cells dominates at its early stage. Furthermore, our findings raise the question of whether T cells are involved in this disease.",
"title": ""
},
{
"docid": "5d3f5e6c52b3ccb97fb8a891074d4fb4",
"text": "OBJECTIVE\nThis study investigated the effects of school-based occupational therapy services on students' handwriting.\n\n\nMETHOD\nStudents 7 to 10 years of age with poor handwriting legibility who received direct occupational therapy services (n = 29) were compared with students who did not receive services (n = 9) on handwriting legibility and speed and associated performance components. Visual-motor, visual-perception, in-hand manipulation, and handwriting legibility and speed were measured at the beginning and end of the academic year. The intervention group received a mean of 16.4 sessions and 528 min of direct occupational therapy services during the school year. According to the therapists, visual-motor skills and handwriting practice were emphasized most in intervention.\n\n\nRESULTS\nStudents in the intervention group showed significant increases in in-hand manipulation and position in space scores. They also improved more in handwriting legibility scores than the students in the comparison group. Fifteen students in the intervention group demonstrated greater than 90% legibility at the end of the school year. On average, legibility increased by 14.2% in the students who received services and by 5.8% in the students who did not receive services. Speed increased slightly more in the students who did not receive services.\n\n\nCONCLUSION\nStudents who received occupational therapy services demonstrated improved letter legibility, but speed and numeral legibility did not demonstrate positive intervention effects.",
"title": ""
},
{
"docid": "e315a7e8e83c4130f9a53dec21598ae6",
"text": "Modern techniques for data analysis and machine learning are so called kernel methods. The most famous and successful one is represented by the support vector machine (SVM) for classification or regression tasks. Further examples are kernel principal component analysis for feature extraction or other linear classifiers like the kernel perceptron. The fundamental ingredient in these methods is the choice of a kernel function, which computes a similarity measure between two input objects. For good generalization abilities of a learning algorithm it is indispensable to incorporate problem-specific a-priori knowledge into the learning process. The kernel function is an important element for this. This thesis focusses on a certain kind of a-priori knowledge namely transformation knowledge. This comprises explicit knowledge of pattern variations that do not or only slightly change the pattern’s inherent meaning e.g. rigid movements of 2D/3D objects or transformations like slight stretching, shifting, rotation of characters in optical character recognition etc. Several methods for incorporating such knowledge in kernel functions are presented and investigated. 1. Invariant distance substitution kernels (IDS-kernels): In many practical questions the transformations are implicitly captured by sophisticated distance measures between objects. Examples are nonlinear deformation models between images. Here an explicit parameterization would require an arbitrary number of parameters. Such distances can be incorporated in distanceand inner-product-based kernels. 2. Tangent distance kernels (TD-kernels): Specific instances of IDS-kernels are investigated in more detail as these can be efficiently computed. We assume differentiable transformations of the patterns. Given such knowledge, one can construct linear approximations of the transformation manifolds and use these efficiently for kernel construction by suitable distance functions. 3. Transformation integration kernels (TI-kernels): The technique of integration over transformation groups for feature extraction can be extended to kernel functions and more general group, non-group, discrete or continuous transformations in a suitable way. Theoretically, these approaches differ in the way the transformations are represented and in the adjustability of the transformation extent. More fundamentally, kernels from category 3 turn out to be positive definite, kernels of types 1 and 2 are not positive definite, which is generally required for being usable in kernel methods. This is the",
"title": ""
},
{
"docid": "5a46d347e83aec7624dde84ecdd5302c",
"text": "This paper presents a new algorithm to automatically solve algebra word problems. Our algorithm solves a word problem via analyzing a hypothesis space containing all possible equation systems generated by assigning the numbers in the word problem into a set of equation system templates extracted from the training data. To obtain a robust decision surface, we train a log-linear model to make the margin between the correct assignments and the false ones as large as possible. This results in a quadratic programming (QP) problem which can be efficiently solved. Experimental results show that our algorithm achieves 79.7% accuracy, about 10% higher than the state-of-the-art baseline (Kushman et al., 2014).",
"title": ""
},
{
"docid": "fbcdb3d565519b47922394dc9d84985f",
"text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.",
"title": ""
},
{
"docid": "0a55717b9efe122c8559f34ac858c282",
"text": "Semantic role labeling (SRL) is dedicated to recognizing the predicate-argument structure of a sentence. Previous studies have shown syntactic information has a remarkable contribution to SRL performance. However, such perception was challenged by a few recent neural SRL models which give impressive performance without a syntactic backbone. This paper intends to quantify the importance of syntactic information to dependency SRL in deep learning framework. We propose an enhanced argument labeling model companying with an extended korder argument pruning algorithm for effectively exploiting syntactic information. Our model achieves state-of-the-art results on the CoNLL-2008, 2009 benchmarks for both English and Chinese, showing the quantitative significance of syntax to neural SRL together with a thorough empirical survey over existing models.",
"title": ""
},
{
"docid": "1a422d0eab570642dce8d24cc2e475c7",
"text": "Recommender systems offer critical services in the age of mass information. A good recommender system selects a certain item for a specific user by recognizing why the user might like the item. This awareness implies that the system should model the background of the items and the users. This background modeling for recommendation is tackled through the various models of collaborative filtering with auxiliary information. This paper presents variational approaches for collaborative filtering to deal with auxiliary information. The proposed methods encompass variational autoencoders through augmenting structures to model the auxiliary information and to model the implicit user feedback. This augmentation includes the ladder network and the generative adversarial network to extract the low-dimensional representations influenced by the auxiliary information. These two augmentations are the first trial in the venue of the variational autoencoders, and we demonstrate their significant improvement on the performances in the applications of the collaborative filtering.",
"title": ""
},
{
"docid": "57334078030a2b2d393a7c236d6a3a1c",
"text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.",
"title": ""
},
{
"docid": "fe998d6d18b9bab9ee3a011761aaab50",
"text": "of quartiles for box plots is a well-established convention: boxes or whiskers should never be used to show the mean, s.d. or s.e.m. As with the division of the box by the median, the whiskers are not necessarily symmetrical (Fig. 1b). The 1.5 multiplier corresponds to approximately ±2.7s (where s is s.d.) and 99.3% coverage of the data for a normal distribution. Outliers beyond the whiskers may be individually plotted. Box plot construction requires a sample of at least n = 5 (preferably larger), although some software does not check for this. For n < 5 we recommend showing the individual data points. Sample size differences can be assessed by scaling the box plot width in proportion to √n (Fig. 1b), the factor by which the precision of the sample’s estimate of population statistics improves as sample size is increased. To assist in judging differences between sample medians, a notch (Fig. 1b) can be used to show the 95% confidence interval (CI) for the median, given by m ± 1.58 × IQR/√n (ref. 1). This is an approximation based on the normal distribution and is accurate in large samples for other distributions. If you suspect the population distribution is not close to normal and your sample size is small, avoid interpreting the interval analytically in the way we have described for CI error bars2. In general, when notches do not overlap, the medians can be judged to differ significantly, but overlap does not rule out a significant difference. For small samples the notch may span a larger interval than the box (Fig. 2). The exact position of box boundaries will be software dependent. First, there is no universally agreedupon method to calculate quartile values, which may be based on simple averaging or linear interpolation. Second, some applications, such as R, use hinges instead of quartiles for box boundaries. The lower and upper hinges are the median of the Points of siGnifiCAnCE",
"title": ""
},
{
"docid": "62a98e1699ba7a3621719a7fba55eaac",
"text": "For Industry 4.0 technology as well as Cyber-Physical Production Systems analysis of data gained more and more importance. But the disparity of data, often make the efficient use of data mining methods difficult due to data with poor quality. To evaluate data quality and further adopt appropriate measures, the proposal develops a data quality model fitted to the specific properties of signal data of industrial processes. Relevant data quality characteristics are identified and a classification of these characteristics is conducted to ascertain important factors. Furthermore, a measurement for the characteristic Completeness, aggregated of its sub-dimensions, is defined. The data quality model is applied to two different use cases showing its effectiveness and validity of the defined measures. The efficient use of real industrial signal data e.g. appropriateness of the data for the specific data mining purpose, is supported by a comprehensive measurement for data quality and the detailed discussion of the influencing factors.",
"title": ""
},
{
"docid": "a08d783229b59342cdb015e051450f94",
"text": "We consider the problem of estimating the remaining useful life (RUL) of a system or a machine from sensor data. Many approaches for RUL estimation based on sensor data make assumptions about how machines degrade. Additionally, sensor data from machines is noisy and oen suers from missing values in many practical seings. We propose Embed-RUL: a novel approach for RUL estimation from sensor data that does not rely on any degradation-trend assumptions, is robust to noise, and handles missing values. EmbedRUL utilizes a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to generate embeddings for multivariate time series subsequences. e embeddings for normal and degraded machines tend to be dierent, and are therefore found to be useful for RUL estimation. We show that the embeddings capture the overall paern in the time series while ltering out the noise, so that the embeddings of two machines with similar operational behavior are close to each other, even when their sensor readings have signicant and varying levels of noise content. We perform experiments on publicly available turbofan engine dataset and a proprietary real-world dataset, and demonstrate that Embed-RUL outperforms the previously reported [24] state-of-the-art on several metrics.",
"title": ""
}
] |
scidocsrr
|
2d5219adebf61677c992dc4dbc868503
|
Personality Recognition on Social Media With Label Distribution Learning
|
[
{
"docid": "e27d949155cef2885a4ab93f4fba18b3",
"text": "Because of its richness and availability, micro-blogging has become an ideal platform for conducting psychological research. In this paper, we proposed to predict active users' personality traits through micro-blogging behaviors. 547 Chinese active users of micro-blogging participated in this study. Their personality traits were measured by the Big Five Inventory, and digital records of micro-blogging behaviors were collected via web crawlers. After extracting 839 micro-blogging behavioral features, we first trained classification models utilizing Support Vector Machine (SVM), differentiating participants with high and low scores on each dimension of the Big Five Inventory [corrected]. The classification accuracy ranged from 84% to 92%. We also built regression models utilizing PaceRegression methods, predicting participants' scores on each dimension of the Big Five Inventory. The Pearson correlation coefficients between predicted scores and actual scores ranged from 0.48 to 0.54. Results indicated that active users' personality traits could be predicted by micro-blogging behaviors.",
"title": ""
},
{
"docid": "3a9d639e87d6163c18dd52ef5225b1a6",
"text": "A variety of approaches have been recently proposed to automatically infer users’ personality from their user generated content in social media. Approaches differ in terms of the machine learning algorithms and the feature sets used, type of utilized footprint, and the social media environment used to collect the data. In this paper, we perform a comparative analysis of state-of-the-art computational personality recognition methods on a varied set of social media ground truth data from Facebook, Twitter and YouTube. We answer three questions: (1) Should personality prediction be treated as a multi-label prediction task (i.e., all personality traits of a given user are predicted at once), or should each trait be identified separately? (2) Which predictive features work well across different on-line environments? and (3) What is the decay in accuracy when porting models trained in one social media environment to another?",
"title": ""
},
{
"docid": "fdc4efad14d79f1855dddddb6a30ace6",
"text": "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase 'sick of' and the word 'depressed'), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive 'my' when mentioning their 'wife' or 'girlfriend' more often than females use 'my' with 'husband' or 'boyfriend'). To date, this represents the largest study, by an order of magnitude, of language and personality.",
"title": ""
},
{
"docid": "a60d79008bfb7cccee262667b481d897",
"text": "It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.",
"title": ""
}
] |
[
{
"docid": "ed8f1c0544a6a33d1fdcaf2fd9fc74c6",
"text": "Stress and negative mood during pregnancy increase risk for poor childbirth outcomes and postnatal mood problems and may interfere with mother–infant attachment and child development. However, relatively little research has focused on the efficacy of psychosocial interventions to reduce stress and negative mood during pregnancy. In this study, we developed and pilot tested an eight-week mindfulness-based intervention directed toward reducing stress and improving mood in pregnancy and early postpartum. We then conducted a small randomized trial (n = 31) comparing women who received the intervention during the last half of their pregnancy to a wait-list control group. Measures of perceived stress, positive and negative affect, depressed and anxious mood, and affect regulation were collected prior to, immediately following, and three months after the intervention (postpartum). Mothers who received the intervention showed significantly reduced anxiety (effect size, 0.89; p < 0.05) and negative affect (effect size, 0.83; p < 0.05) during the third trimester in comparison to those who did not receive the intervention. The brief and nonpharmaceutical nature of this intervention makes it a promising candidate for use during pregnancy.",
"title": ""
},
{
"docid": "cd407caad37c33ee5540b079e94782c7",
"text": "Despite the remarkable recent progress, person reidentification (Re-ID) approaches are still suffering from the failure cases where the discriminative body parts are missing. To mitigate such cases, we propose a simple yet effective Horizontal Pyramid Matching (HPM) approach to fully exploit various partial information of a given person, so that correct person candidates can be still identified even even some key parts are missing. Within the HPM, we make the following contributions to produce a more robust feature representation for the Re-ID task: 1) we learn to classify using partial feature representations at different horizontal pyramid scales, which successfully enhance the discriminative capabilities of various person parts; 2) we exploit average and max pooling strategies to account for person-specific discriminative information in a global-local manner. To validate the effectiveness of the proposed HPM, extensive experiments are conducted on three popular benchmarks, including Market-1501, DukeMTMC-ReID and CUHK03. In particular, we achieve mAP scores of 83.1%, 74.5% and 59.7% on these benchmarks, which are the new state-of-the-arts. Our code is available on Github .",
"title": ""
},
{
"docid": "3a1a8f884d85234099a853d64e87ebd3",
"text": "Fault localization, a central aspect of network fault management, is a process of deducing the exact source of a failure from a set of observed failure indications. It has been a focus of research activity since the advent of modern comm unication systems, which produced numerous fault localization techniques. Howeve r, ascommunication systems evolved becoming more complex and offering new capabilities, the requirements imposed on fault localization techniques have changed as well. It is fair to say that despite this research effort, fault localization in complex communication systems remains an open research problem. This paper discusses the challenges of fault localization in complex communication systems and presents an overview of solutions proposed in the course of the last t en years, while discussing their advantages and shortcomings. The survey is followed by the presenta tion of potential directions for future research in this area. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1e5202850748b0f613807b0452eb89a2",
"text": "This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.",
"title": ""
},
{
"docid": "0efe3ccc1c45121c5167d3792a7fcd25",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "b068cd17374110aab59e2b6a4ae2877d",
"text": "For an autonomous mobile robot, an important task to accomplish while maneuvering in outdoor rugged environments is terrain traversability analyzing. Due to the large variety of terrain, a general representation cannot be obtained a priori. Thus, the ability to determine the traversability based on the vehicle motion information and its environments is necessary, and more likely to enable access to interesting sites while insuring the soundness and stability of the mobile robot. We introduce a novel method which can predict motion information based on extracted image features from outdoor university campus environments, to finally estimate the traversability of terrains. A wheeled mobile robot equipped with an optical sensor and an acceleration sensor was used to conduct experiments.",
"title": ""
},
{
"docid": "1451d5d8729c2e78c8c97e53c44f71a0",
"text": "Inflammation plays a key role in the progression of cardiovascular disease, the leading cause of mortality in ESRD (end-stage renal disease). Over recent years, inflammation has been greatly reduced with treatment, but mortality remains high. The aim of the present study was to assess whether low (<2 pg/ml) circulating levels of IL-6 (interleukin-6) are necessary and sufficient to activate the transcription factor STAT3 (signal transducer and activator of transcription 3) in human hepatocytes, and if this micro-inflammatory state was associated with changes in gene expression of some acute-phase proteins involved in cardiovascular mortality in ESRD. Human hepatocytes were treated for 24 h in the presence and absence of serum fractions from ESRD patients and healthy subjects with different concentrations of IL-6. The specific role of the cytokine was also evaluated by cell experiments with serum containing blocked IL-6. Furthermore, a comparison of the effects of IL-6 from patient serum and rIL-6 (recombinant IL-6) at increasing concentrations was performed. Confocal microscopy and Western blotting demonstrated that STAT3 activation was associated with IL-6 cell-membrane-bound receptor overexpression only in hepatocytes cultured with 1.8 pg/ml serum IL-6. A linear activation of STAT3 and IL-6 receptor expression was also observed after incubation with rIL-6. Treatment of hepatocytes with 1.8 pg/ml serum IL-6 was also associated with a 31.6-fold up-regulation of hepcidin gene expression and a 8.9-fold down-regulation of fetuin-A gene expression. In conclusion, these results demonstrated that low (<2 pg/ml) circulating levels of IL-6, as present in non-inflamed ESRD patients, are sufficient to activate some inflammatory pathways and can differentially regulate hepcidin and fetuin-A gene expression.",
"title": ""
},
{
"docid": "e291f7ada6890ae9db8417b29f35d061",
"text": "This study proposes a new framework for citation content analysis (CCA), for syntactic and semantic analysis of citation content that can be used to better analyze the rich sociocultural context of research behavior. This framework could be considered the next generation of citation analysis. The authors briefly review the history and features of content analysis in traditional social sciences and its previous application in library and information science (LIS). Based on critical discussion of the theoretical necessity of a new method as well as the limits of citation analysis, the nature and purposes of CCA are discussed, and potential procedures to conduct CCA, including principles to identify the reference scope, a two-dimensional (citing and cited) and two-module (syntactic and semantic) codebook, are provided and described. Future work and implications are also suggested.",
"title": ""
},
{
"docid": "757eadf19fee04c91e51ac8e6d3c6de1",
"text": "OBJECTIVES\nInfantile hemangiomas often are inapparent at birth and have a period of rapid growth during early infancy followed by gradual involution. More precise information on growth could help predict short-term outcomes and make decisions about when referral or intervention, if needed, should be initiated. The objective of this study was to describe growth characteristics of infantile hemangioma and compare growth with infantile hemangioma referral patterns.\n\n\nMETHODS\nA prospective cohort study involving 7 tertiary care pediatric dermatology practices was conducted. Growth data were available for a subset of 526 infantile hemangiomas in 433 patients from a cohort study of 1096 children. Inclusion criteria were age younger than 18 months at time of enrollment and presence of at least 1 infantile hemangioma. Growth stage and rate were compared with clinical characteristics and timing of referrals.\n\n\nRESULTS\nEighty percent of hemangioma size was reached during the early proliferative stage at a mean age of 3 months. Differences in growth between hemangioma subtypes included that deep hemangiomas tend to grow later and longer than superficial hemangiomas and that segmental hemangiomas tended to exhibit more continued growth after 3 months of age. The mean age of first visit was 5 months. Factors that predicted need for follow-up included ongoing proliferation, larger size, deep component, and segmental and indeterminate morphologic subtypes.\n\n\nCONCLUSIONS\nMost infantile hemangioma growth occurs before 5 months, yet 5 months was also the mean age at first visit to a specialist. Recognition of growth characteristics and factors that predict the need for follow-up could help aid in clinical decision-making. The first few weeks to months of life are a critical time in hemangioma growth. Infants with hemangiomas need close observation during this period, and those who need specialty care should be referred and seen as early as possible within this critical growth period.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "ce2590b39ef85a1a3e7d5b4914746a62",
"text": "In the smart grid system, an advanced meter infrastructure (AMI) is an integral subsystem mainly used to collect monthly consumption and load profile. Hence, a large amount of information will be exchanged within these systems. Data concentrator unit (DCU) is used to collect the information from smart meters before forwarding to meter data management system. In order to meet the AMI's QoS such as throughput and delay, the optimal placement for DCU has to be thoroughly investigated. This paper aims at developing an optimal location algorithm for the DCU placement in a non-beacon-mode IEEE 802.15.4 smart grid network. The optimization algorithm preliminarily computes the DCU position based on a minimum hop count metric. Nevertheless, it is possible that multiple positions achieving the minimum hop count may be found; therefore, the additional performance metric, i.e. the averaged throughput and delay, will be used to select the ultimately optimal location. In this paper, the maximum throughput with the acceptable averaged delay constraint is proposed by considering the behavior of the AMI meters which is almost stationary in the network. From the simulation results, it is obvious that the proposed methodology is significantly effective.",
"title": ""
},
{
"docid": "7bfb88e2d19ae6bb053674d7f7bcb313",
"text": "This thesis presents a set of machine learning and deep learning approaches for building systems with the goal of source-code plagiarism detection. The task of plagiarism detection can be treated as assessing the amount of similarity presented within given entities. These entities can be anything like documents containing text, source-code etc. Plagiarism detection can be formulated as a fine-grained pattern classification problem. The detection process begins by transforming the entity into feature representations. These features are representatives of their corresponding entities in a discriminative high-dimensional space, where we can measure for similarity. Here, by entity we mean solution to programming assignments in typical computer science courses. The quality of the features determine the quality of detection As our first contribution, we propose a machine learning based approach for plagiarism detection in programming assignments using source-code metrics. Most of the well known plagiarism detectors either employ a text-based approach or use features based on the property of the program at a syntactic level. However, both these approaches succumb to code obfuscation which is a huge obstacle for automatic software plagiarism detection. Our proposed method uses source-code metrics as features, which are extracted from the intermediate representation of a program in a compiler infrastructure such as gcc. We demonstrate the use of unsupervised and supervised learning techniques on the extracted feature representations and show that our system is robust to code obfuscation. We validate our method on assignments from introductory programming course. The preliminary results show that our system is better when compared to other popular tools like MOSS. For visualizing the local and global structure of the features, we obtained the low-dimensional representations of our features using a popular technique called t-SNE, a variation of Stochastic Neighbor Embedding, which can preserve neighborhood identity in low-dimensions. Based on this idea of preserving neighborhood identity, we mine interesting information such as the diversity in student solution approaches to a given problem. The presence of well defined clusters in low-dimensional visualizations demonstrate that our features are capable of capturing interesting programming patterns. As our second contribution, we demonstrate how deep neural networks can be employed to learn features for source-code plagiarism detection. We employ a character-level Recurrent Neural Network (char-RNN), a character-level language model to map the characters in a source-code to continuousvalued vectors called embeddings. We use these program embeddings as deep features for plagiarism",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "fdfb71f5905b2af2c01c6b4d1fe23d7e",
"text": "Many believe the electric power system is undergoing a profound change driven by a number of needs. There's the need for environmental compliance and energy conservation. We need better grid reliability while dealing with an aging infrastructure. And we need improved operational effi ciencies and customer service. The changes that are happening are particularly signifi cant for the electricity distribution grid, where \"blind\" and manual operations, along with the electromechanical components, will need to be transformed into a \"smart grid.\" This transformation will be necessary to meet environmental targets, to accommodate a greater emphasis on demand response (DR), and to support plug-in hybrid electric vehicles (PHEVs) as well as distributed generation and storage capabilities. It is safe to say that these needs and changes present the power industry with the biggest challenge it has ever faced. On one hand, the transition to a smart grid has to be evolutionary to keep the lights on; on the other hand, the issues surrounding the smart grid are signifi cant enough to demand major changes in power systems operating philosophy.",
"title": ""
},
{
"docid": "a6e062620666a4f6e88373d746d4418c",
"text": "A method for fabricating planar implantable microelectrode arrays was demonstrated using a process that relied on ultra-thin silicon substrates, which ranged in thickness from 25 to 50 μm. The challenge of handling these fragile materials was met via a temporary substrate support mechanism. In order to compensate for putative electrical shielding of extracellular neuronal fields, separately addressable electrode arrays were defined on each side of the silicon device. Deep reactive ion etching was employed to create sharp implantable shafts with lengths of up to 5 mm. The devices were flip-chip bonded onto printed circuit boards (PCBs) by means of an anisotropic conductive adhesive film. This scalable assembly technique enabled three-dimensional (3D) integration through formation of stacks of multiple silicon and PCB layers. Simulations and measurements of microelectrode noise appear to suggest that low impedance surfaces, which could be formed by electrodeposition of gold or other materials, are required to ensure an optimal signal-to-noise ratio as well a low level of interchannel crosstalk. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "a5c9de4127df50d495c7372b363691cf",
"text": "This book is an accompaniment to the computer software package mathStatica (which runs as an add-on to Mathematica). The book comes with two CD-ROMS: mathStatica, and a 30-day trial version of Mathematica 4.1. The mathStatica CD-ROM includes an applications pack for doing mathematical statistics, custom Mathematica palettes and an electronic version of the book that is identical to the printed text, but can be used interactively to generate animations of some of the book's figures (e.g. as a parameter is varied). (I found this last feature particularly valuable.) MathStatica has statistical operators for determining expectations (and hence characteristic functions, for example) and probabilities, for finding the distributions of transformations of random variables and generally for dealing with the kinds of problems and questions that arise in mathematical statistics. Applications include estimation, curve-fitting, asymptotics, decision theory and moment conversion formulae (e.g. central to cumulant). To give an idea of the coverage of the book: after an introductory chapter, there are three chapters on random variables, then chapters on systems of distributions (e.g. Pearson), multivariate distributions, moments, asymptotic theory, decision theory and then three chapters on estimation. There is an appendix, which deals with technical Mathematica details. What distinguishes mathStatica from statistical packages such as S-PLUS, R, SPSS and SAS is its ability to deal with the algebraic/symbolic problems that are the main concern of mathematical statistics. This is, of course, because it is based on Mathematica, and this is also the reason that it has a note–book interface (which enables one to incorporate text, equations and pictures into a single line), and why arbitrary-precision calculations can be performed. According to the authors, 'this book can be used as a course text in mathematical statistics or as an accompaniment to a more traditional text'. Assumed knowledge includes preliminary courses in statistics, probability and calculus. The emphasis is on problem solving. The material is supposedly pitched at the same level as Hogg and Craig (1995). However some topics are treated in much more depth than in Hogg and Craig (characteristic functions for instance, which rate less than one page in Hogg and Craig). Also, the coverage is far broader than that of Hogg and Craig; additional topics include for instance stable distributions, cumulants, Pearson families, Gram-Charlier expansions and copulae. Hogg and Craig can be used as a textbook for a third-year course in mathematical statistics in some Australian universities , whereas there is …",
"title": ""
},
{
"docid": "1b7a10807e85018743338c7e59075987",
"text": "We propose a 600 GHz data transmission of high definition television using the combination of a photonic emission using an uni-travelling carrier photodiode and an electronic detection, featuring a very low power at the receiver. Only 10 nW of THz power at 600GHz were sufficient to ensure real-time error-free operation. This combination of photonics at emission and heterodyne detection lead to achieve THz wireless links with a safe level of electromagnetic exposure.",
"title": ""
},
{
"docid": "ab74bef6dce156cd335267109e6fc0bc",
"text": "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.",
"title": ""
}
] |
scidocsrr
|
0c6a149282a8200e82cfeae8c37bed64
|
Alert Correlation for Extracting Attack Strategies
|
[
{
"docid": "ca6ae788fc63563e39e1cb611dbdd8c5",
"text": "STATL is an extensible state/transition-based attack desc ription language designed to support intrusion detection. The language allows one to describe computer pen trations as sequences of actions that an attacker performs to compromise a computer system. A STATL descripti on of an attack scenario can be used by an intrusion detection system to analyze a stream of events and de tect possible ongoing intrusions. Since intrusion detection is performed in different domains (i.e., the netw ork or the hosts) and in different operating environments (e.g., Linux, Solaris, or Windows NT), it is useful to h ave an extensible language that can be easily tailored to different target environments. STATL defines do main-independent features of attack scenarios and provides constructs for extending the language to describe attacks in particular domains and environments. The STATL language has been successfully used in describing both network-based and host-based attacks, and it has been tailored to very different environments, e.g ., Sun Microsystems’ Solaris and Microsoft’s Windows NT. An implementation of the runtime support for the STATL language has been developed and a toolset of intrusion detection systems based on STATL has b een implemented. The toolset was used in a recent intrusion detection evaluation effort, delivering very favorable results. This paper presents the details of the STATL syntax and its semantics. Real examples from bot h the host and network-based extensions of the language are also presented.",
"title": ""
}
] |
[
{
"docid": "84f2bdd2977885acdb2f92e6c34cc705",
"text": "Each forensic case is characterized by its own uniqueness. Deficient forensic cases require additional sources of human identifiers to assure the identity. We report on two different cases illustrating the role of teeth in answering challenging forensic questions. The first case involves identification of an adipocere male found in a car submersed in water for approximately 2 years. The second scenario, which involves paternity DNA testing of an exhumed body, was performed approximately 2.8 years post-mortem. The difficulty in anticipating the degradation of the DNA is one of the main obstacles. DNA profiling of dental tissues, DNA quantification by using real-time PCR (PowerQuant™ System/Promega) and a histological dental examination have been performed to address the encountered impediments of adverse post-mortem changes. Our results demonstrate that despite the adverse environmental conditions, a successful STR profile of DNA isolated from the root of teeth can be generated with respect to tooth type and apportion. We conclude that cementocytes are a fruitful source of DNA. Cementum resists DNA degradation in comparison to other tissues with respect to the intra- and inter-individual variation of histological and anatomical structures.",
"title": ""
},
{
"docid": "fec4b030280f228c2568c4a5eccbac28",
"text": "Distillation columns with a high-purity product (down to 7 ppm) have been studied. A steady state m odel is developed using a commercial process simulator. The model is validated against industrial data. Based on the mod el, three major optimal operational changes are identified. T hese are, lowering the location of the feed & side draw strea ms, increasing the pressure at the top of the distillat ion column and changing the configuration of the products draw. It is estimated that these three changes will increase th e throughput of each column by ~5%. The validated model is also u ed to quantify the effects on key internal column paramet ers such as the flooding factor, in the event of significant ch anges to product purity and throughput. Keywordshigh-purity distillation columns; steady state model, operating condition optimization",
"title": ""
},
{
"docid": "f829820706687c186e998bfed5be9c42",
"text": "As deep learning systems are widely adopted in safetyand securitycritical applications, such as autonomous vehicles, banking systems, etc., malicious faults and attacks become a tremendous concern, which potentially could lead to catastrophic consequences. In this paper, we initiate the first study of leveraging physical fault injection attacks on Deep Neural Networks (DNNs), by using laser injection technique on embedded systems. In particular, our exploratory study targets four widely used activation functions in DNNs development, that are the general main building block of DNNs that creates non-linear behaviors – ReLu, softmax, sigmoid, and tanh. Our results show that by targeting these functions, it is possible to achieve a misclassification by injecting faults into the hidden layer of the network. Such result can have practical implications for realworld applications, where faults can be introduced by simpler means (such as altering the supply voltage).",
"title": ""
},
{
"docid": "d7d0fa6279b356d37c2f64197b3d721d",
"text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.",
"title": ""
},
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "1b22c3d5bb44340fcb66a1b44b391d71",
"text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.",
"title": ""
},
{
"docid": "b79243c0961984c9c839b42ce3e62680",
"text": "With a focus on design methodology for developing a compact and lightweight minimally invasive surgery (MIS) robot manipulator, the goal of this study is progress toward a next-generation surgical robot system that will help surgeons deliver healthcare more effectively. Based on an extensive database of in-vivo surgical measurements, the workspace requirements were clearly defined. The pivot point constraint in MIS makes the spherical manipulator a natural candidate. An experimental evaluation process helped to more clearly understand the application and limitations of the spherical mechanism as an MIS robot manipulator. The best configuration consists of two serial manipulators in order to avoid collision problems. A complete kinematic analysis and optimization incorporating the requirements for MIS was performed to find the optimal link lengths of the manipulator. The results show that for the serial spherical 2-link manipulator used to guide the surgical tool, the optimal link lengths (angles) are (60/spl deg/, 50/spl deg/). A prototype 6-DOF surgical robot has been developed and will be the subject of further study.",
"title": ""
},
{
"docid": "4b3425ce40e46b7a595d389d61daca06",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "635d43789e3cd9fb339f59867ca7ce36",
"text": "Most of previous studies either take product knowle dge as a moderator, or involve varying degree of product knowledge on the consumers’ perceived evalu ation as an influential factor. It is rare to come across research that discusses how both brand image and product knowledge affect purchase intention. Thus, this research has chosen both an intrinsic an d extrinsic product cue ─brand image and product knowledge─as independent variables while using price discount as a moderator and conducted research on purchase intention.",
"title": ""
},
{
"docid": "1516f9d674d911cef4b8d5cd8780afe7",
"text": "This paper describes a novel approach to event-based debugging. The approach is based on a (coarsegrained) dataflow view of events: a high-level event is recognized when an appropriate combination of lower-level events on which it depends has occurred. Event recognition is controlled using familiar programming language constructs. This approach is more flexible and powerful than current ones. It allows arbitrary debugger language commands to be executed when attempting to form higher-level events. It also allows users to specify event recognition in much the same way that they write programs. This paper also describes a prototype, Dalek, that employs the dataflow approach for debugging sequential programs. Dalek demonstrates the feasibility and attractiveness of the dataflow approach. One important motivation for this work is that current sequential debugging tools are inadequate. Dalek contributes toward remedying such inadequacies by providing events and a powerful debugging language. Generalizing the dataflow approach so that it can aid in the debugging of concurrent programs is under investigation.",
"title": ""
},
{
"docid": "8174a4a425dc7f097be101a8461268a0",
"text": "One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.",
"title": ""
},
{
"docid": "ff9c7af613b3b9041321c1a8241b40bf",
"text": "In this letter, a novel compact printed antenna for triple-band WLAN/WiMAX applications is presented. The proposed antenna consists of three simple circular-arc-shaped strips, whose whole geometry looks like “ear” type. By adjusting the geometries and the sizes of these three circular-arc-shaped strips, three different resonance modes can be effectively created for three distinct frequency bands, respectively. The overall dimension of the proposed antenna can reach <formula formulatype=\"inline\"><tex Notation=\"TeX\">$18\\times 37\\times 1\\ {\\hbox{mm}}^{3}$</tex> </formula>. Measured results show that the presented antenna can cover three separated impedance bandwidths of 400 MHz (2.38–2.78 GHz), 480 MHz (3.28–3.76 GHz), and 1000 MHz (4.96–5.96 GHz), which are well applied for both 2.4/5.2/5.8-GHz WLAN bands and 2.5/3.5/5.5-GHz WiMAX bands.",
"title": ""
},
{
"docid": "8100d99d28be7e5ee32a03e34ce3cd14",
"text": "Music artists have composed pieces that are both creative and precise. For example, classical music is well-known for its meticulous structure and emotional effect. Recurrent Neural Networks (RNNs) are powerful models that have achieved excellent performance on difficult learning tasks having temporal dependencies. We propose generative RNN models that create sheet music with well-formed structure and stylistic conventions without predefining music composition rules to the models. We see that Character RNNs are able to learn some patterns but not create structurally accurate music, with a test accuracy of 60% and fooling only upto 35% of the human listeners to believe that the music was created by a human. Generative Adversarial Networks (GANs) were tried, with various training techniques, but produced no meaningful results due to instable training. On the other hand, Seq2Seq models do very well in producing both structurally correct and human-pleasing music, with a test accuracy of 65% and some of its generated music fooling ⇠ 70% of the human listeners.",
"title": ""
},
{
"docid": "71da334a81fa6109e56050895618b348",
"text": "With the rise of the Internet and other open networks, a large number of security protocols have been developed and deployed in order to provide secure communication. The analysis of such security protocols has turned out to be extremely difficult for humans, as witnessed by the fact that many protocols were found to be flawed after deployment. This has driven the research in formal analysis of security protocols. Unfortunately, there are no effective approaches yet for constructing correct and efficient protocols, and work on concise formal logics that might allow one to easily prove that a protocol is correct in a formal model, is still ongoing. The most effective approach so far has been automated falsification or verification of such protocols with state-of-the-art tools such as ProVerif [1] or the Avispa tools [2]. These tools have shown to be effective at finding attacks on protocols (Avispa) or establishing correctness of protocols (ProVerif). In this paper we present a push-button tool, called Scyther, for the verification, the falsification, and the analysis of security protocols. Scyther can be freely downloaded, and provides a number of novel features not offered by other tools, as well as state-of-the-art performance. Novel features include the possibility of unbounded verification with guaranteed termination, analysis of infinite sets of traces in terms of patterns, and support for multi-protocol analysis. Scyther is based on a pattern refinement algorithm, providing concise representations of (infinite) sets of traces. This allows the tool to assist in the analysis of classes of attacks and possible protocol behaviours, or to prove correctness for an unbounded number of protocol sessions. The tool has been successfully applied in both research and teaching.",
"title": ""
},
{
"docid": "77281793a88329ca2cf9fd8eeaf01524",
"text": "This paper describes a new circuit integrated on silicon, which generates temperature-independent bias currents. Such a circuit is firstly employed to obtain a current reference with first-order temperature compensation, then it is modified to obtain second-order temperature compensation. The operation principle of the new circuits is described and the relationships between design and technology process parameters are derived. These circuits have been designed by a 0.35 /spl mu/m BiCMOS technology process and the thermal drift of the reference current has been evaluated by computer simulations. They show good thermal performance and in particular, the new second-order temperature-compensated current reference has a mean temperature drift of only 28 ppm//spl deg/C in the temperature range between -30/spl deg/C and 100/spl deg/C.",
"title": ""
},
{
"docid": "448040bcefe4a67a2a8c4b2cf75e7ebc",
"text": "Visual analytics has been widely studied in the past decade. One key to make visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline which provides effective abstractions for designing and implementing visual analytics systems. In this paper we review the previous work on visual analytics pipelines and individual modules from multiple perspectives: data, visualization, model and knowledge. In each module we discuss various representations and descriptions of pipelines inside the module, and compare the commonalities and the differences among them.",
"title": ""
},
{
"docid": "cf77d802b84093a2b2bc666bf0c5665e",
"text": "Past research has reported that females use exclamation points more frequently than do males. Such research often characterizes exclamation points as ‘‘markers of excitability,’’ a term that suggests instability and emotional randomness, yet it has not necessarily examined the contexts in which exclamation points appeared for evidence of ‘‘excitability.’’ The present study uses a 16-category coding frame in a content analysis of 200 exclamations posted to two electronic discussion groups serving the library and information science profession. The results indicate that exclamation points rarely function as markers of excitability in these professional forums, but may function as markers of friendly interaction, a finding with implications for understanding gender styles in email and other forms of computer-mediated communication.",
"title": ""
},
{
"docid": "60a92a659fbfe0c81da9a6902e062455",
"text": "Public knowledge of crime and justice is largely derived from the media. This paper examines the influence of media consumption on fear of crime, punitive attitudes and perceived police effectiveness. This research contributes to the literature by expanding knowledge on the relationship between fear of crime and media consumption. This study also contributes to limited research on the media’s influence on punitive attitudes, while providing a much-needed analysis of the relationship between media consumption and satisfaction with the police. Employing OLS regression, the results indicate that respondents who are regular viewers of crime drama are more likely to fear crime. However, the relationship is weak. Furthermore, the results indicate that gender, education, income, age, perceived neighborhood problems and police effectiveness are statistically related to fear of crime. In addition, fear of crime, income, marital status, race, and education are statistically related to punitive attitudes. Finally, age, fear of crime, race, and perceived neighborhood problems are statistically related to perceived police effectiveness.",
"title": ""
},
{
"docid": "0db0761e87cf381b3b214f6cb56e26fc",
"text": "This study explores the geographic dependencies of echo-chamber communication on Twitter during the Brexit referendum campaign. We review the literature on filter bubbles, echo chambers, and polarization to test five hypotheses positing that echo-chamber communication is associated with homophily in the physical world, chiefly the geographic proximity between users advocating sides of the campaign. The results support the hypothesis that echo chambers in the Leave campaign are associated with geographic propinquity, whereas in the Remain campaign the reverse relationship was found. This study presents evidence that geographically proximate social enclaves interact with polarized political discussion where echo-chamber communication is observed. The article concludes with a discussion of these findings and the contribution to research on filter bubbles and echo chambers.",
"title": ""
},
{
"docid": "60a7e9be448a0ac4e25d1eed5b075de9",
"text": "Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8% respectively.",
"title": ""
}
] |
scidocsrr
|
044965d98a98b3f69de5218a3629a2de
|
Can Natural Language Processing Become Natural Language Coaching?
|
[
{
"docid": "8788f14a2615f3065f4f0656a4a66592",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
}
] |
[
{
"docid": "5ce4d44c4796a8fa506acf02074496f8",
"text": "Focus and scope The focus of the workshop was applications of logic programming, i.e., application problems, in whole or in part, that are solved by using logic programming languages and systems. A particular theme of interest was to explore the ease of development and maintenance, clarity, performance, and tradeoffs among these features, brought about by programming using a logic paradigm. The goal was to help provide directions for future research advances and application development. Real-world problems increasingly involve complex data and logic, making the use of logic programming more and more beneficial for such complex applications. Despite the diverse areas of application, their common underlying requirements are centered around ease of development and maintenance, clarity, performance, integration with other tools, and tradeoffs among these properties. Better understanding of these important principles will help advance logic programming research and lead to benefits for logic programming applications. The workshop was organized around four main areas of application: Enterprise Software, Control Systems, Intelligent Agents, and Deep Analysis. These general areas included topics such as business intelligence, ontology management, text processing, program analysis, model checking, access control, network programming, resource allocation, system optimization, decision making, and policy administration. The issues proposed for discussion included language features, implementation efficiency, tool support and integration, evaluation methods, as well as teaching and training.",
"title": ""
},
{
"docid": "0af670278702a8680401ceeb421a05f2",
"text": "We investigate semisupervised learning (SL) and pool-based active learning (AL) of a classifier for domains with label-scarce (LS) and unknown categories, i.e., defined categories for which there are initially no labeled examples. This scenario manifests, e.g., when a category is rare, or expensive to label. There are several learning issues when there are unknown categories: 1) it is a priori unknown which subset of (possibly many) measured features are needed to discriminate unknown from common classes and 2) label scarcity suggests that overtraining is a concern. Our classifier exploits the inductive bias that an unknown class consists of the subset of the unlabeled pool’s samples that are atypical (relative to the common classes) with respect to certain key (albeit a priori unknown) features and feature interactions. Accordingly, we treat negative log- $p$ -values on raw features as nonnegatively weighted derived feature inputs to our class posterior, with zero weights identifying irrelevant features. Through a hierarchical class posterior, our model accommodates multiple common classes, multiple LS classes, and unknown classes. For learning, we propose a novel semisupervised objective customized for the LS/unknown category scenarios. While several works minimize class decision uncertainty on unlabeled samples, we instead preserve this uncertainty [maximum entropy (maxEnt)] to avoid overtraining. Our experiments on a variety of UCI Machine learning (ML) domains show: 1) the use of $p$ -value features coupled with weight constraints leads to sparse solutions and gives significant improvement over the use of raw features and 2) for LS SL and AL, unlabeled samples are helpful, and should be used to preserve decision uncertainty (maxEnt), rather than to minimize it, especially during the early stages of AL. Our AL system, leveraging a novel sample-selection scheme, discovers unknown classes and discriminates LS classes from common ones, with sparing use of oracle labeling.",
"title": ""
},
{
"docid": "c3dd3dd59afe491fcc6b4cd1e32c88a3",
"text": "The Semantic Web drives towards the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, and functions including cryptographic, string, math. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.",
"title": ""
},
{
"docid": "8d0ccf63b21af19cb750eb571fc59ae6",
"text": "This paper presents a motor imagery based Brain Computer Interface (BCI) that uses single channel EEG signal from the C3 or C4 electrode placed in the motor area of the head. Time frequency analysis using Short Time Fourier Transform (STFT) is used to compute spectrogram from the EEG data. The STFT is scaled to have gray level values on which Grey Co-occurrence Matrix (GLCM) is computed. Texture descriptors such as correlation, energy, contrast, homogeneity and dissimilarity are calculated from the GLCM matrices. The texture descriptors are used to train a logistic regression classifier which is then used to classify the left and right motor imagery signals. The single-channel motor imagery classification system is tested offline with different subjects. The average offline accuracy is 87.6%. An online BCI system is implemented in openViBE with the single channel classification scheme. The stimuli presentations and feedback are implemented in Python and integrated with the openViBe BCI system.",
"title": ""
},
{
"docid": "57ccd593f1be27463f9e609d700452dd",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Sustainable supply chain network design: An optimization-oriented review Majid Eskandarpour, Pierre Dejax, Joe Miemczyk, Olivier Péton",
"title": ""
},
{
"docid": "d2f36cc750703f5bbec2ea3ef4542902",
"text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …",
"title": ""
},
{
"docid": "de6f4705f2d0f829c90e69c0f03a6b6f",
"text": "This paper investigates the opportunities and challenges in the use of dynamic radio transmit power control for prolonging the lifetime of body-wearable sensor devices used in continuous health monitoring. We first present extensive empirical evidence that the wireless link quality can change rapidly in body area networks, and a fixed transmit power results in either wasted energy (when the link is good) or low reliability (when the link is bad). We quantify the potential gains of dynamic power control in body-worn devices by benchmarking off-line the energy savings achievable for a given level of reliability.We then propose a class of schemes feasible for practical implementation that adapt transmit power in real-time based on feedback information from the receiver. We profile their performance against the offline benchmark, and provide guidelines on how the parameters can be tuned to achieve the desired trade-off between energy savings and reliability within the chosen operating environment. Finally, we implement and profile our scheme on a MicaZ mote based platform, and also report preliminary results from the ultra-low-power integrated healthcare monitoring platform we are developing at Toumaz Technology.",
"title": ""
},
{
"docid": "1350f4e274947881f4562ab6596da6fd",
"text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "5dda89fbe7f5757588b5dff0e6c2565d",
"text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female gures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight gures to be more attractive than normal or overweight gures, regardless of WHR. The female gure with the high WHR (0.86) was judged to be more attractive than the gure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These ndings lend stronger support to sociocultural rather than evolutionary hypotheses.",
"title": ""
},
{
"docid": "ba7f157187fec26847c10fa772d71665",
"text": "We describe an implementation of the Hopcroft and Tarjan planarity test and em bedding algorithm The program tests the planarity of the input graph and either constructs a combinatorial embedding if the graph is planar or exhibits a Kuratowski subgraph if the graph is non planar",
"title": ""
},
{
"docid": "d8c4e6632f90c3dd864be93db881a382",
"text": "Document understanding techniques such as document clustering and multidocument summarization have been receiving much attention recently. Current document clustering methods usually represent the given collection of documents as a document-term matrix and then conduct the clustering process. Although many of these clustering methods can group the documents effectively, it is still hard for people to capture the meaning of the documents since there is no satisfactory interpretation for each document cluster. A straightforward solution is to first cluster the documents and then summarize each document cluster using summarization methods. However, most of the current summarization methods are solely based on the sentence-term matrix and ignore the context dependence of the sentences. As a result, the generated summaries lack guidance from the document clusters. In this article, we propose a new language model to simultaneously cluster and summarize documents by making use of both the document-term and sentence-term matrices. By utilizing the mutual influence of document clustering and summarization, our method makes; (1) a better document clustering method with more meaningful interpretation; and (2) an effective document summarization method with guidance from document clustering. Experimental results on various document datasets show the effectiveness of our proposed method and the high interpretability of the generated summaries.",
"title": ""
},
{
"docid": "a0b862a758c659b62da2114143bf7687",
"text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.",
"title": ""
},
{
"docid": "ce74305a30bd322a78b3827921ae7224",
"text": "While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of Softmax scores obtained from both networks consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for classes of AD, lesion and normal respectively with an average of 87.6%. Additionally, this improved CNN network appears to outperform the others when in comparison with 2D version only of CNN network as well as a number of state of the art hand-crafted approaches. As a result, these approaches deliver accuracy rates in percentage of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60, 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information rooted in both 2D slices and 3D blocks of CT images and an elaborated hand-crated approach of 3D KAZE.",
"title": ""
},
{
"docid": "22572c36ce1b816ee30ef422cb290dea",
"text": "Visual context is important in object recognition and it is still an open problem in computer vision. Along with the advent of deep convolutional neural networks (CNN), using contextual information with such systems starts to receive attention in the literature. At the same time, aerial imagery is gaining momentum. While advances in deep learning make good progress in aerial image analysis, this problem still poses many great challenges. Aerial images are often taken under poor lighting conditions and contain low resolution objects, many times occluded by trees or taller buildings. In this domain, in particular, visual context could be of great help, but there are still very few papers that consider context in aerial image understanding. Here we introduce context as a complementary way of recognizing objects. We propose a dual-stream deep neural network model that processes information along two independent pathways, one for local and another for global visual reasoning. The two are later combined in the final layers of processing. Our model learns to combine local object appearance as well as information from the larger scene at the same time and in a complementary way, such that together they form a powerful classifier. We test our dual-stream network on the task of segmentation of buildings and roads in aerial images and obtain state-of-the-art results on the Massachusetts Buildings Dataset. We also introduce two new datasets, for buildings and road segmentation, respectively, and study the relative importance of local appearance vs. the larger scene, as well as their performance in combination. While our local-global model could also be useful in general recognition tasks, we clearly demonstrate the effectiveness of visual context in conjunction with deep nets for aerial image",
"title": ""
},
{
"docid": "9b3db8c2632ad79dc8e20435a81ef2a1",
"text": "Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.",
"title": ""
},
{
"docid": "5ccb3ab32054741928b8b93eea7a9ce2",
"text": "A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, identification of time constraints and more. The goal of this paper is to address an important issue in workflows modelling and specification, which is data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflows specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment may prevent the process from correct execution, execute process on inconsistent data or even lead to process suspension. A discussion on essential requirements of the workflow data model in order to support data validation is also given.",
"title": ""
},
{
"docid": "2fa356bb47bf482f8585c882ad5d9409",
"text": "As an important arithmetic module, the adder plays a key role in determining the speed and power consumption of a digital signal processing (DSP) system. The demands of high speed and power efficiency as well as the fault tolerance nature of some applications have promoted the development of approximate adders. This paper reviews current approximate adder designs and provides a comparative evaluation in terms of both error and circuit characteristics. Simulation results show that the equal segmentation adder (ESA) is the most hardware-efficient design, but it has the lowest accuracy in terms of error rate (ER) and mean relative error distance (MRED). The error-tolerant adder type II (ETAII), the speculative carry select adder (SCSA) and the accuracy-configurable approximate adder (ACAA) are equally accurate (provided that the same parameters are used), however ETATII incurs the lowest power-delay-product (PDP) among them. The almost correct adder (ACA) is the most power consuming scheme with a moderate accuracy. The lower-part-OR adder (LOA) is the slowest, but it is highly efficient in power dissipation.",
"title": ""
},
{
"docid": "0eca851ca495916502788c9931d1c1f3",
"text": "Information in various applications is often expressed as character sequences over a finite alphabet (e.g., DNA or protein sequences). In Big Data era, the lengths and sizes of these sequences are growing explosively, leading to grand challenges for the classical NP-hard problem, namely searching for the Multiple Longest Common Subsequences (MLCS) from multiple sequences. In this paper, we first unveil the fact that the state-of-the-art MLCS algorithms are unable to be applied to long and large-scale sequences alignments. To overcome their defects and tackle the longer and large-scale or even big sequences alignments, based on the proposed novel problem-solving model and various strategies, e.g., parallel topological sorting, optimal calculating, reuse of intermediate results, subsection calculation and serialization, etc., we present a novel parallel MLCS algorithm. Exhaustive experiments on the datasets of both synthetic and real-world biological sequences demonstrate that both the time and space of the proposed algorithm are only linear in the number of dominants from aligned sequences, and the proposed algorithm significantly outperforms the state-of-the-art MLCS algorithms, being applicable to longer and large-scale sequences alignments.",
"title": ""
}
] |
scidocsrr
|
6bed2d29343350d14da52fc3d09410bd
|
A Survey of Attribute-based Access Control with User Revocation in Cloud Data Storage
|
[
{
"docid": "d7b0711c45166395689037d21942578d",
"text": "Cipher text-Policy Attribute-Based Proxy Re-Encryption (CP-ABPRE) extends the traditional Proxy Re-Encryption (PRE) by allowing a semi-trusted proxy to transform a cipher text under an access policy to the one with the same plaintext under another access policy (i.e. attribute-based re-encryption). The proxy, however, learns nothing about the underlying plaintext. CP-ABPRE has many real world applications, such as fine-grained access control in cloud storage systems and medical records sharing among different hospitals. Previous CP-ABPRE schemes leave how to be secure against Chosen-Cipher text Attacks (CCA) as an open problem. This paper, for the first time, proposes a new CP-ABPRE to tackle the problem. The new scheme supports attribute-based re-encryption with any monotonic access structures. Despite our scheme is constructed in the random oracle model, it can be proved CCA secure under the decisional q-parallel bilinear Diffie-Hellman exponent assumption.",
"title": ""
}
] |
[
{
"docid": "fdbf20917751369d7ffed07ecedc9722",
"text": "In order to evaluate the effect of static magnetic field (SMF) on morphological and physiological responses of soybean to water stress, plants were grown under well-watered (WW) and water-stress (WS) conditions. The adverse effects of WS given at different growth stages was found on growth, yield, and various physiological attributes, but WS at the flowering stage severely decreased all of above parameters in soybean. The result indicated that SMF pretreatment to the seeds significantly increased the plant growth attributes, biomass accumulation, and photosynthetic performance under both WW and WS conditions. Chlorophyll a fluorescence transient from SMF-treated plants gave a higher fluorescence yield at J–I–P phase. Photosynthetic pigments, efficiency of PSII, performance index based on absorption of light energy, photosynthesis, and nitrate reductase activity were also higher in plants emerged from SMF-pretreated seeds which resulted in an improved yield of soybean. Thus SMF pretreatment mitigated the adverse effects of water stress in soybean.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "0894f7715bfa5b568a734fa44815962d",
"text": "BACKGROUND\nVoluntary medical male circumcision (VMMC) is a priority HIV preventive intervention. To facilitate male circumcision scale- up, the World Health Organization is actively seeking circumcision techniques that are quicker, easier, and safer than open surgical methods.\n\n\nOBJECTIVE\nTo compare conventional open surgical circumcision with suturing with a minimally invasive technique using the Gomco circumcision clamp plus tissue adhesive.\n\n\nMETHODS\nWe conducted a non-blinded randomised controlled trial comprising 200 male volunteers >18 years of age, seen at the outpatient university teaching clinic of the Catholic University of Mozambique. We compared two interventions - open surgical circumcision with suturing v. Gomco instrument plus tissue adhesive. Our primary outcome was intraoperative time and our secondary outcomes included: ease of performance, post-operative pain, adverse events, time to healing, patient satisfaction and cosmetic result.\n\n\nRESULTS\nThe intraoperative time was less with the Gomco/tissue adhesive technique (mean 12.8 min v. 22.5 min; p<0.001). Adverse events were similar except that wound disruption was greater in the Gomco/tissue adhesive group, with no difference in wound healing at 4 weeks. Levels of satisfaction were high in both groups. The cosmetic result was superior in the Gomco/tissue adhesive group.\n\n\nCONCLUSIONS\nThis study has important implications for the scale-up of VMMC services. Removing the foreskin with the Gomco instrument and sealing the wound with cyanoacrylate tissue adhesive in adults is quicker, is an easier technique to learn, and is potentially safer than open surgical VMMC. A disposable plastic, Gomco-like device should be produced and evaluated for use in resource-limited settings.",
"title": ""
},
{
"docid": "c5c2ba17a949f3c5b8b0d098ffdfa11e",
"text": "A longstanding claim in the literature holds that by-phras es are special in the passive, receiving certain external argument roles that by-phrases in nominals cannot, for inst ance the role of experiencer. This paper challenges this lon gsta ding claim and shows that by-phrases are not special in the passiv e: they can receive all of the thematic roles that they can in v erbal passives. They are banned from certain nominals for the same reason they are banned from certain VP types like unaccusati ve and sporadic advancements: by-phrases require the syntact ic and semantic presence of an external argument. By-phrase s c n receive a uniform analysis, whether they occur with verbs or in nominals. The analysis proposed here involves syntactic word formation, with syntactic heads effecting passivization a nd nominalization. It also relies on syntactic selection fo r selectional features, and proposes a theory of such features. The concep tion of grammar that emerges is one without lexical rules, wh ere passivization and nominalization take place in the syntax.",
"title": ""
},
{
"docid": "3c38b800109f75a352d16da2ee35b8bb",
"text": "Recurrent neural networks (RNNs) have been widely used for processing sequential data. However, RNNs are commonly difficult to train due to the well-known gradient vanishing and exploding problems and hard to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent unit (GRU) were developed to address these problems, but the use of hyperbolic tangent and the sigmoid action functions results in gradient decay over layers. Consequently, construction of an efficiently trainable deep network is challenging. In addition, all the neurons in an RNN layer are entangled together and their behaviour is hard to interpret. To address these problems, a new type of RNN, referred to as independently recurrent neural network (IndRNN), is proposed in this paper, where neurons in the same layer are independent of each other and they are connected across layers. We have shown that an IndRNN can be easily regulated to prevent the gradient exploding and vanishing problems while allowing the network to learn long-term dependencies. Moreover, an IndRNN can work with non-saturated activation functions such as relu (rectified linear unit) and be still trained robustly. Multiple IndRNNs can be stacked to construct a network that is deeper than the existing RNNs. Experimental results have shown that the proposed IndRNN is able to process very long sequences (over 5000 time steps), can be used to construct very deep networks (21 layers used in the experiment) and still be trained robustly. Better performances have been achieved on various tasks by using IndRNNs compared with the traditional RNN and LSTM.",
"title": ""
},
{
"docid": "6fd3f4ab064535d38c01f03c0135826f",
"text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.",
"title": ""
},
{
"docid": "b1f5cef9df1f241f57bf4c3d84ce0573",
"text": "AIM AND OBJECTIVES\nTo improve the knowledge and skills of diabetic patients on insulin injections using mobile phone short message services and to evaluate the association of this intervention with metabolic outcomes.\n\n\nBACKGROUND\nMobile communication technologies are widely used in Turkey, which maintains a diabetic population of more than 6·5 million. However, there are a limited number of studies using mobile technologies in the challenging and complicated management of diabetes.\n\n\nDESIGN\nA one group pretest-posttest design was used in this study.\n\n\nMETHODS\nThe study sample consisted of 221 people with type 1 and type 2 Diabetes Mellitus from eight outpatient clinics in six cities in Turkey. The 'Demographic and diabetes-related information Form' and 'Insulin Injection Technique and Knowledge Form' were used in the initial interview. Subsequently, 12 short messages related to insulin administration were sent to patients twice a week for six months. Each patient's level of knowledge and skills regarding both the insulin injection technique and glycaemic control (glycated haemoglobin A1c) levels were measured at three months and six months during the text messaging period and six months later (12 months total) when text messaging was stopped.\n\n\nRESULTS\nThe mean age of the patients with diabetes was 39·8 ± 16·2 years (min: 18; max: 75). More than half of the patients were females with a mean duration of diabetes of 11·01 ± 7·22 years (min 1; max: 32). Following the text message reminders, the patients' level of knowledge and skills regarding the insulin injection technique improved at month 3 and 6 (p < 0·05). The patients' A1c levels statistically significantly decreased at the end of month 3, 6 and 12 compared to the baseline values (p < 0·05). The number of insulin injection sites and the frequency of rotation of skin sites for insulin injections also increased.\n\n\nCONCLUSION\nThis study demonstrated that a short message services-based information and reminder system on insulin injection administration provided to insulin-dependent patients with diabetes by nurses resulted in improved self-administration of insulin and metabolic control.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nToday, with the increased use of mobile communication technologies, it is possible for nurses to facilitate diabetes management by using these technologies. We believe that mobile technologies, which are not only easy to use and to follow-up with by healthcare providers, are associated with positive clinical outcomes for patients and should be more commonly used in the daily practice of diabetes management.",
"title": ""
},
{
"docid": "ac9f71a97f6af0718587ffd0ea92d31d",
"text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Finegrained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack. ACM Reference Format: Jed Liu, Joe Corbett-Davies, Andrew Ferraiuolo, Alexander Ivanov, Mulong Luo, G. Edward Suh, Andrew C. Myers, and Mark Campbell. 2018. Secure Autonomous Cyber-Physical Systems Through Verifiable Information Flow Control. InWorkshop on Cyber-Physical Systems Security & Privacy (CPS-SPC ’18), October 19, 2018, Toronto, ON, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3264888.3264889",
"title": ""
},
{
"docid": "bd111864fb4081b79e17ccd517157413",
"text": "We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data. Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a “blind spot” in the receptive field, we address two of its shortcomings: inefficient training and somewhat disappointing final denoising performance. This is achieved through a novel blind-spot convolutional network architecture that allows efficient self-supervised training, as well as application of Bayesian distribution prediction on output colors. Together, they bring the selfsupervised model on par with fully supervised deep learning techniques in terms of both quality and training speed in the case of i.i.d. Gaussian noise.",
"title": ""
},
{
"docid": "59681da45f9e3e466eb38c266ff1e0b8",
"text": "Emotional suppression has been associated with generally negative social consequences (Butler et al., 2003; Gross & John, 2003). A cultural perspective suggests, however, that these consequences may be moderated by cultural values. We tested this hypothesis in a two-part study, and found that, for Americans holding Western-European values, habitual suppression was associated with self-protective goals and negative emotion. In addition, experimentally elicited suppression resulted in reduced interpersonal responsiveness during face-to-face interaction, along with negative partner-perceptions and hostile behavior. These deleterious effects were reduced when individuals with more Asian values suppressed, and these reductions were mediated by cultural differences in the responsiveness of the suppressors. These findings suggest that many of suppression's negative social impacts may be moderated by cultural values.",
"title": ""
},
{
"docid": "0dac38edf20c2a89a9eb46cd1300162c",
"text": "Common software weaknesses, such as improper input validation, integer overflow, can harm system security directly or indirectly, causing adverse effects such as denial-of-service, execution of unauthorized code. Common Weakness Enumeration (CWE) maintains a standard list and classification of common software weakness. Although CWE contains rich information about software weaknesses, including textual descriptions, common sequences and relations between software weaknesses, the current data representation, i.e., hyperlined documents, does not support advanced reasoning tasks on software weaknesses, such as prediction of missing relations and common consequences of CWEs. Such reasoning tasks become critical to managing and analyzing large numbers of common software weaknesses and their relations. In this paper, we propose to represent common software weaknesses and their relations as a knowledge graph, and develop a translation-based, description-embodied knowledge representation learning method to embed both software weaknesses and their relations in the knowledge graph into a semantic vector space. The vector representations (i.e., embeddings) of software weaknesses and their relations can be exploited for knowledge acquisition and inference. We conduct extensive experiments to evaluate the performance of software weakness and relation embeddings in three reasoning tasks, including CWE link prediction, CWE triple classification, and common consequence prediction. Our knowledge graph embedding approach outperforms other description- and/or structure-based representation learning methods.",
"title": ""
},
{
"docid": "6cfc078d0b908cb020417d4503e5bade",
"text": "How does an entrepreneur’s social network impact crowdfunding? Based on social capital theory, we developed a research model and conducted a comparative study using objective data collected from China and the U.S. We found that an entrepreneur’s social network ties, obligations to fund other entrepreneurs, and the shared meaning of the crowdfunding project between the entrepreneur and the sponsors had significant effects on crowdfunding performance in both China and the U.S. The predictive power of the three dimensions of social capital was stronger in China than it was in the U.S. Obligation also had a greater impact in China. 2014 Elsevier B.V. All rights reserved. § This study is supported by the Natural Science Foundation of China (71302186), the Chinese Ministry of Education Humanities and Social Sciences Young Scholar Fund (12YJCZH306), the China National Social Sciences Fund (11AZD077), and the Fundamental Research Funds for the Central Universities (JBK120505). * Corresponding author. Tel.: +1 218 726 7334. E-mail addresses: haichao_zheng@163.com (H. Zheng), dli@d.umn.edu (D. Li), kaitlynwu@swufe.edu.cn (J. Wu), xuyun@swufe.edu.cn (Y. Xu).",
"title": ""
},
{
"docid": "8ce15f6a0d6e5a49dcc2953530bceb19",
"text": "In signal restoration by Bayesian inference, one typically uses a parametric model of the prior distribution of the signal. Here, we consider how the parameters of a prior model should be estimated from observations of uncorrupted signals. A lot of recent work has implicitly assumed that maximum likelihood estimation is the optimal estimation method. Our results imply that this is not the case. We first obtain an objective function that approximates the error occurred in signal restoration due to an imperfect prior model. Next, we show that in an important special case (small gaussian noise), the error is the same as the score-matching objective function, which was previously proposed as an alternative for likelihood based on purely computational considerations. Our analysis thus shows that score matching combines computational simplicity with statistical optimality in signal restoration, providing a viable alternative to maximum likelihood methods. We also show how the method leads to a new intuitive and geometric interpretation of structure inherent in probability distributions.",
"title": ""
},
{
"docid": "0c4a9ee404cec4176e9d0f41c6d73b15",
"text": "A novel envelope detector structure is proposed in this paper that overcomes the traditional trade-off required in these circuits, improving both the tracking and keeping of the signal. The method relies on holding the signal by two capacitors, discharging one when the other is in hold mode and employing the held signals to form the output. Simulation results show a saving greater than 60% of the capacitor area for the same ripple (0.3%) and a release time constant (0.4¿s) much smaller than that obtained by the conventional circuits.",
"title": ""
},
{
"docid": "22d17576fef96e5fcd8ef3dd2fb0cc5f",
"text": "I n a previous article (\" Agile Software Development: The Business of Innovation , \" Computer, Sept. 2001, pp. 120-122), we introduced agile software development through the problem it addresses and the way in which it addresses the problem. Here, we describe the effects of working in an agile style. Over recent decades, while market forces, systems requirements, implementation technology, and project staff were changing at a steadily increasing rate, a different development style showed its advantages over the traditional one. This agile style of development directly addresses the problems of rapid change. A dominant idea in agile development is that the team can be more effective in responding to change if it can • reduce the cost of moving information between people, and • reduce the elapsed time between making a decision to seeing the consequences of that decision. To reduce the cost of moving information between people, the agile team works to • place people physically closer, • replace documents with talking in person and at whiteboards, and • improve the team's amicability—its sense of community and morale— so that people are more inclined to relay valuable information quickly. To reduce the time from decision to feedback, the agile team • makes user experts available to the team or, even better, part of the team and • works incrementally. Making user experts available as part of the team gives developers rapid feedback on the implications to the user of their design choices. The user experts, seeing the growing software in its earliest stages, learn both what the developers misunderstood and also which of their requests do not work as well in practice as they had thought. The term agile, coined by a group of people experienced in developing software this way, has two distinct connotations. The first is the idea that the business and technology worlds have become turbulent , high speed, and uncertain, requiring a process to both create change and respond rapidly to change. The first connotation implies the second one: An agile process requires responsive people and organizations. Agile development focuses on the talents and skills of individuals and molds process to specific people and teams, not the other way around. The most important implication to managers working in the agile manner is that it places more emphasis on people factors in the project: amicability, talent, skill, and communication. These qualities become a primary concern …",
"title": ""
},
{
"docid": "9311198676b2cc5ad31145c53c91134d",
"text": "A novel fractal called Fractal Clover Leaf (FCL) is introduced and shown to have well miniaturization capabilities. The proposed patches are fed by L-shape probe to achieve wide bandwidth operation in PCS band. A numerical parametric study on the proposed antenna is presented. It is found that the antenna can attain more than 72% size reduction as well as 17% impedance bandwidth (VSWR<2), in cost of less gain. It is also shown that impedance matching could be reached by tuning probe parameters. The proposed antenna is suitable for handset applications and tight packed planar phased arrays to achieve lower scan angels than rectangular patches.",
"title": ""
},
{
"docid": "6a5e0e30eb5b7f2efe76e0e58e04ae4a",
"text": "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call “percepts” using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts, however, can lead to high-dimensionality video representations. To mitigate this effect and control the number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler caption-decoder model and without extra 3D CNN features.",
"title": ""
},
{
"docid": "840463688f36a5fd14efa8a1a35bfb8e",
"text": "In this paper, we propose a new hybrid ant colony optimization (ACO) algorithm for feature selection (FS), called ACOFS, using a neural network. A key aspect of this algorithm is the selection of a subset of salient features of reduced size. ACOFS uses a hybrid search technique that combines the advantages of wrapper and filter approaches. In order to facilitate such a hybrid search, we designed new sets of rules for pheromone update and heuristic information measurement. On the other hand, the ants are guided in correct directions while constructing graph (subset) paths using a bounded scheme in each and every step in the algorithm. The above combinations ultimately not only provide an effective balance between exploration and exploitation of ants in the search, but also intensify the global search capability of ACO for a highquality solution in FS. We evaluate the performance of ACOFS on eight benchmark classification datasets and one gene expression dataset, which have dimensions varying from 9 to 2000. Extensive experiments were conducted to ascertain how AOCFS works in FS tasks. We also compared the performance of ACOFS with the results obtained from seven existing well-known FS algorithms. The comparison details show that ACOFS has a remarkable ability to generate reduced-size subsets of salient features while yielding significant classification accuracy. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "18288c42186b7fec24a5884454e69989",
"text": "This article addresses the problem of multichannel audio source separation. We propose a framework where deep neural networks (DNNs) are used to model the source spectra and combined with the classical multichannel Gaussian model to exploit the spatial information. The parameters are estimated in an iterative expectation-maximization (EM) fashion and used to derive a multichannel Wiener filter. We present an extensive experimental study to show the impact of different design choices on the performance of the proposed technique. We consider different cost functions for the training of DNNs, namely the probabilistically motivated Itakura-Saito divergence, and also Kullback-Leibler, Cauchy, mean squared error, and phase-sensitive cost functions. We also study the number of EM iterations and the use of multiple DNNs, where each DNN aims to improve the spectra estimated by the preceding EM iteration. Finally, we present its application to a speech enhancement problem. The experimental results show the benefit of the proposed multichannel approach over a single-channel DNN-based approach and the conventional multichannel nonnegative matrix factorization-based iterative EM algorithm.",
"title": ""
},
{
"docid": "a2a8228b27b066fca497ddc2fa8b323e",
"text": "Digital Image Processing has found to be useful in many domains. In sports, it can either be used as an analytical tool to determine strategic instances in a game or can be used in the broadcast of video to television viewers. Modern day coverage of sports involves multiple cameras and an array of technologies to support it, since manually going through every video coming to a station would be a near-impossible task, a wide range of Digital Image Processing algorithms are applied to do the same. Highlight Generation and Event Detection are the foremost areas in sports where a multitude of DIP algorithms exist. This study provides an insight into the applications of Digital Image Processing in Sports, concentrating on algorithms related to video broadcast while listing their advantages and drawbacks.",
"title": ""
}
] |
scidocsrr
|
00085f74479e0291c7171f31c1dfec36
|
Cyclic Prefix-Based Universal Filtered Multicarrier System and Performance Analysis
|
[
{
"docid": "3d85e6ee7867fa453fb0fd33cffcaad8",
"text": "Cognitive radio has been an active research area in wireless communications over the past 10 years. TV Digital Switch Over resulted in new regulatory regimes, which offer the first large-scale opportunity for cognitive radio and networks. This article considers the most recent regulatory rules for TV White Space opportunistic usage, and proposes technologies to operate in these bands. It addresses techniques to assess channel vacancy by the cognitive radio, focusing on the two incumbent systems of the TV bands, namely TV stations and wireless microphones. Spectrum-sensing performance is discussed under TV White Space regulation parameters. Then, modulation schemes for the opportunistic radio are discussed, showing the limitations of classical multi-carrier techniques and the advantages of filter bank modulations. In particular, the low adjacent band leakage of filter bank is addressed, and its benefit for spectrum pooling is stressed as a means to offer broadband access through channel aggregation.",
"title": ""
}
] |
[
{
"docid": "34401a7e137cffe44f67e6267f29aa57",
"text": "Future Point-of-Care (PoC) molecular-level diagnosis requires advanced biosensing systems that can achieve high sensitivity and portability at low power consumption levels, all within a low price-tag for a variety of applications such as in-field medical diagnostics, epidemic disease control, biohazard detection, and forensic analysis. Magnetically labeled biosensors are proposed as a promising candidate to potentially eliminate or augment the optical instruments used by conventional fluorescence-based sensors. However, magnetic biosensors developed thus far require externally generated magnetic biasing fields [1–4] and/or exotic post-fabrication processes [1,2]. This limits the ultimate form-factor of the system, total power consumption, and cost. To address these impediments, we present a low-power scalable frequency-shift magnetic particle biosensor array in bulk CMOS, which provides single-bead detection sensitivity without any (electrical or permanent) external magnets.",
"title": ""
},
{
"docid": "d40a1b72029bdc8e00737ef84fdf5681",
"text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.",
"title": ""
},
{
"docid": "e808606994c3fd8eea1b78e8a3e55b8c",
"text": "We describe a Japanese-English patent parallel corpus created from the Japanese and US patent data provided for the NTCIR-6 patent retrieval task. The corpus contains about 2 million sentence pairs that were aligned automatically. This is the largest Japanese-English parallel corpus, which will be available to the public after the 7th NTCIR workshop meeting. We estimated that about 97% of the sentence pairs were correct alignments and about 90% of the alignments were adequate translations whose English sentences reflected almost perfectly the contents of the corresponding Japanese sentences.",
"title": ""
},
{
"docid": "d05e4998114dd485a3027f2809277512",
"text": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.",
"title": ""
},
{
"docid": "d614eb429aa62e7d568acbba8ac7fe68",
"text": "Four women, who previously had undergone multiple unsuccessful in vitro fertilisation (IVF) cycles because of failure of implantation of good quality embryos, were identified as having coexisting uterine adenomyosis. Endometrial biopsies showed that adenomyosis was associated with a prominent aggregation of macrophages within the superficial endometrial glands, potentially interfering with embryo implantation. The inactivation of adenomyosis by an ultra-long pituitary downregulation regime promptly resulted in successful pregnancy for all women in this case series.",
"title": ""
},
{
"docid": "76f2d6cd240d2070bfa7f67b03344075",
"text": "Objective and automatic sensor systems to monitor ingestive behavior of individuals arise as a potential solution to replace inaccurate method of self-report. This paper presents a simple sensor system and related signal processing and pattern recognition methodologies to detect periods of food intake based on non-invasive monitoring of chewing. A piezoelectric strain gauge sensor was used to capture movement of the lower jaw from 20 volunteers during periods of quiet sitting, talking and food consumption. These signals were segmented into non-overlapping epochs of fixed length and processed to extract a set of 250 time and frequency domain features for each epoch. A forward feature selection procedure was implemented to choose the most relevant features, identifying from 4 to 11 features most critical for food intake detection. Support vector machine classifiers were trained to create food intake detection models. Twenty-fold cross-validation demonstrated per-epoch classification accuracy of 80.98% and a fine time resolution of 30 s. The simplicity of the chewing strain sensor may result in a less intrusive and simpler way to detect food intake. The proposed methodology could lead to the development of a wearable sensor system to assess eating behaviors of individuals.",
"title": ""
},
{
"docid": "f1b48ea0f93578de8bbe083057211753",
"text": "Anecdotes from creative eminences suggest that executive control plays an important role in creativity, but scientific evidence is sparse. Invoking the Dual Pathway to Creativity Model, the authors hypothesize that working memory capacity (WMC) relates to creative performance because it enables persistent, focused, and systematic combining of elements and possibilities (persistence). Study 1 indeed showed that under cognitive load, participants performed worse on a creative insight task. Study 2 revealed positive associations between time-on-task and creativity among individuals high but not low in WMC, even after controlling for general intelligence. Study 3 revealed that across trials, semiprofessional cellists performed increasingly more creative improvisations when they had high rather than low WMC. Study 4 showed that WMC predicts original ideation because it allows persistent (rather than flexible) processing. The authors conclude that WMC benefits creativity because it enables the individual to maintain attention focused on the task and prevents undesirable mind wandering.",
"title": ""
},
{
"docid": "44abac09424c717f3a691e4ba2640c1a",
"text": "In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.",
"title": ""
},
{
"docid": "ada7b43edc18b321c57a978d7a3859ae",
"text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.",
"title": ""
},
{
"docid": "5c92db9bd23e5081a6a15419aa78abca",
"text": "The original k-means algorithm is designed to work primarily on numeric data sets. This prohibits the algorithm from being applied to categorical data clustering, which is an integral part of data mining and has attracted much attention recently. The k-modes algorithm extended the k-means paradigm to cluster categorical data by using a frequency-based method to update the cluster modes versus the k-means fashion of minimizing a numerically valued cost. However, the dissimilarity measure used in k-modes doesn’t consider the relative frequencies of attribute values in each cluster mode, this will result in a weaker intra-cluster similarity by allocating less similar objects to the cluster. In this paper, we present an experimental study on applying a new dissimilarity measure to the k-modes clustering to improve its clustering accuracy. The measure is based on the idea that the similarity between a data object and cluster mode, is directly proportional to the sum of relative frequencies of the common values in mode. Experimental results on real life datasets show that, the modified algorithm is superior to the original kmodes algorithm with respect to clustering accuracy.",
"title": ""
},
{
"docid": "9734cfaecfbd54f968291e9154e2ab3d",
"text": "The Modbus protocol and its variants are widely used in industrial control applications, especially for pipeline operations in the oil and gas sector. This paper describes the principal attacks on the Modbus Serial and Modbus TCP protocols and presents the corresponding attack taxonomies. The attacks are summarized according to their threat categories, targets and impact on control system assets. The attack taxonomies facilitate formal risk analysis efforts by clarifying the nature and scope of the security threats on Modbus control systems and networks. Also, they provide insights into potential mitigation strategies and the relative costs and benefits of implementing these strategies. c © 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ab0541d9ec1ea0cf7ad85d685267c142",
"text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.",
"title": ""
},
{
"docid": "91d3008dcd6c351d6cc0187c59cad8df",
"text": "Peer-to-peer markets such as eBay, Uber, and Airbnb allow small suppliers to compete with traditional providers of goods or services. We view the primary function of these markets as making it easy for buyers to
nd sellers and engage in convenient, trustworthy transactions. We discuss elements of market design that make this possible, including search and matching algorithms, pricing, and reputation systems. We then develop a simple model of how these markets enable entry by small or exible suppliers, and the resulting impact on existing
rms. Finally, we consider the regulation of peer-to-peer markets, and the economic arguments for di¤erent approaches to licensing and certi
cation, data and employment regulation. We appreciate support from the National Science Foundation, the Stanford Institute for Economic Policy Research, the Toulouse Network on Information Technology, and the Alfred P. Sloan Foundation. yEinav and Levin: Department of Economics, Stanford University and NBER. Farronato: Harvard Business School. Email: leinav@stanford.edu, chiarafarronato@gmail.com, jdlevin@stanford.edu.",
"title": ""
},
{
"docid": "7788cf06b7c9f09013bd15607e11cd79",
"text": "Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. black Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates does not result in a model for probability functions anymore. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.",
"title": ""
},
{
"docid": "b0950aaea13e1eaf13a17d64feddf9b0",
"text": "In this paper, we describe the development of CiteSpace as an integrated environment for identifying and tracking thematic trends in scientific literature. The goal is to simplify the process of finding not only highly cited clusters of scientific articles, but also pivotal points and trails that are likely to characterize fundamental transitions of a knowledge domain as a whole. The trails of an advancing research field are captured through a sequence of snapshots of its intellectual structure over time in the form of Pathfinder networks. These networks are subsequently merged with a localized pruning algorithm. Pivotal points in the merged network are algorithmically identified and visualized using the betweenness centrality metric. An example of finding clinical evidence associated with reducing risks of heart diseases is included to illustrate how CiteSpace could be used. The contribution of the work is its integration of various change detection algorithms and interactive visualization capabilities to simply users' tasks.",
"title": ""
},
{
"docid": "f87fea9cd76d1545c34f8e813347146e",
"text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.",
"title": ""
},
{
"docid": "16156f3f821fe6d65c8a753995f50b18",
"text": "Memory over commitment enables cloud providers to host more virtual machines on a single physical server, exploiting spare CPU and I/O capacity when physical memory becomes the bottleneck for virtual machine deployment. However, over commiting memory can also cause noticeable application performance degradation. We present Ginkgo, a policy framework for over omitting memory in an informed and automated fashion. By directly correlating application-level performance to memory, Ginkgo automates the redistribution of scarce memory across all virtual machines, satisfying performance and capacity constraints. Ginkgo also achieves memory gains for traditionally fixed-size Java applications by coordinating the redistribution of available memory with the activities of the Java Virtual Machine heap. When compared to a non-over commited system, Ginkgo runs the Day Trader 2.0 and SPEC Web 2009 benchmarks with the same number of virtual machines while saving up to 73% (50% omitting free space) of a physical server's memory while keeping application performance degradation within 7%.",
"title": ""
},
{
"docid": "10baebc8e9a0071cbe73d66ccaec3a50",
"text": "In this paper, the switched-capacitor concept is extended to the voltage-doubler discontinuous conduction mode SEPIC rectifier. As a result, a set of single-phase hybrid SEPIC power factor correction rectifiers able to provide lower voltage stress on the semiconductors and/or higher static gain, which can be easily increased with additional switched-capacitor cells, is proposed. Hence, these rectifiers could be employed in applications that require higher output voltage. In addition, the converters provide a high power factor and a reduced total harmonic distortion in the input current. The topology employs a three-state switch, and three different implementations are described, two being bridgeless versions, which can provide gains in relation to efficiency. The structures and the topological states, a theoretical analysis in steady state, a dynamic model for control, and a design example are reported herein. Furthermore, a prototype with specifications of 1000-W output power, 220-V input voltage, 800-V output voltage, and 50-kHz switching frequency was designed in order to verify the theoretical analysis.",
"title": ""
},
{
"docid": "c75095680818ccc7094e4d53815ef475",
"text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function . The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.",
"title": ""
},
{
"docid": "bd94b129fdb45adf5d31f2b59cf66867",
"text": "Systems based on Brain Computer Interface (BCI) have been developed from the past three decades for assisting locked-in state patients. Researchers across the globe are developing new techniques to increase the BCI accuracy. In 1924 Dr. Hans Berger recorded the first EEG signal. The number of experimental measurements of brain activity has been done using human control commands. The main function of BCI is to convert and transmit human intentions into appropriate motion commands for the wheelchairs, robots, devices, and so forth. BCI allows improving the quality of life of disabled patients and letting them interact with their environment. Since the BCI signals are non-stationary, the main challenges in the non-invasive BCI system are to accurately detect and classify the signals. This paper reviews the State of Art of BCI and techniques used for feature extraction and classification using electroencephalogram (EEG) and highlights the need of adaptation concept.",
"title": ""
}
] |
scidocsrr
|
ebd766147dffbed15b1f0b8d4eade80c
|
Misinformation spreading on Facebook
|
[
{
"docid": "facc1845ddde1957b2c1b74a62d74261",
"text": "The large availability of user provided contents on online social media facilitates people aggregation around shared beliefs, interests, worldviews and narratives. In spite of the enthusiastic rhetoric about the so called collective intelligence unsubstantiated rumors and conspiracy theories-e.g., chemtrails, reptilians or the Illuminati-are pervasive in online social networks (OSN). In this work we study, on a sample of 1.2 million of individuals, how information related to very distinct narratives-i.e. main stream scientific and conspiracy news-are consumed and shape communities on Facebook. Our results show that polarized communities emerge around distinct types of contents and usual consumers of conspiracy news result to be more focused and self-contained on their specific contents. To test potential biases induced by the continued exposure to unsubstantiated rumors on users' content selection, we conclude our analysis measuring how users respond to 4,709 troll information-i.e. parodistic and sarcastic imitation of conspiracy theories. We find that 77.92% of likes and 80.86% of comments are from users usually interacting with conspiracy stories.",
"title": ""
},
{
"docid": "2997fc35a86646d8a43c16217fc8079b",
"text": "During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained on its messages (or “tweets”). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has been often framed and studied as a supervised classification problem in an off-line (post-hoc) setting. In this paper, we present a semi-supervised ranking model for scoring tweets according to their credibility. This model is used in TweetCred , a real-time system that assigns a credibility score to tweets in a user’s timeline. TweetCred , available as a browser plug-in, was installed and used by 1,127 Twitter users within a span of three months. During this period, the credibility score for about 5.4 million tweets was computed, allowing us to evaluate TweetCred in terms of response time, effectiveness and usability. To the best of our knowledge, this is the first research work to develop a real-time system for credibility on Twitter, and to evaluate it on a user base of this size.",
"title": ""
}
] |
[
{
"docid": "63872db3bc792911d10d28ecf39ae79e",
"text": "Stock market prediction has always been one of the hottest topics in research, as well as a great challenge due to its complex and volatile nature. However, most of the existing methods neglect the impact from mass media that will greatly affect the behavior of investors. In this paper we present a system that combines the information from both related news releases and technical indicators to enhance the predictability of the daily stock price trends. The performance shows that this system can achieve higher accuracy and return than a single source system.",
"title": ""
},
{
"docid": "71e786ccfc57ad62e90dd4a7b85cbedd",
"text": "Studies addressing behavioral functions of dopamine (DA) in the nucleus accumbens septi (NAS) are reviewed. A role of NAS DA in reward has long been suggested. However, some investigators have questioned the role of NAS DA in rewarding effects because of its role in aversive contexts. As findings supporting the role of NAS DA in mediating aversively motivated behaviors accumulate, it is necessary to accommodate such data for understanding the role of NAS DA in behavior. The aim of the present paper is to provide a unifying interpretation that can account for the functions of NAS DA in a variety of behavioral contexts: (1) its role in appetitive behavioral arousal, (2) its role as a facilitator as well as an inducer of reward processes, and (3) its presently undefined role in aversive contexts. The present analysis suggests that NAS DA plays an important role in sensorimotor integrations that facilitate flexible approach responses. Flexible approach responses are contrasted with fixed instrumental approach responses (habits), which may involve the nigro-striatal DA system more than the meso-accumbens DA system. Functional properties of NAS DA transmission are considered in two stages: unconditioned behavioral invigoration effects and incentive learning effects. (1) When organisms are presented with salient stimuli (e.g., novel stimuli and incentive stimuli), NAS DA is released and invigorates flexible approach responses (invigoration effects). (2) When proximal exteroceptive receptors are stimulated by unconditioned stimuli, NAS DA is released and enables stimulus representations to acquire incentive properties within specific environmental context. It is important to make a distinction that NAS DA is a critical component for the conditional formation of incentive representations but not the retrieval of incentive stimuli or behavioral expressions based on over-learned incentive responses (i.e., habits). Nor is NAS DA essential for the cognitive perception of environmental stimuli. Therefore, even without normal NAS DA transmission, the habit response system still allows animals to perform instrumental responses given that the tasks take place in fixed environment. Such a role of NAS DA as an incentive-property constructor is not limited to appetitive contexts but also aversive contexts. This dual action of NAS DA in invigoration and incentive learning may explain the rewarding effects of NAS DA as well as other effects of NAS DA in a variety of contexts including avoidance and unconditioned/conditioned increases in open-field locomotor activity. Particularly, the present hypothesis offers the following interpretation for the finding that both conditioned and unconditioned aversive stimuli stimulate DA release in the NAS: NAS DA invigorates approach responses toward 'safety'. Moreover, NAS DA modulates incentive properties of the environment so that organisms emit approach responses toward 'safety' (i.e., avoidance responses) when animals later encounter similar environmental contexts. There may be no obligatory relationship between NAS DA release and positive subjective effects, even though these systems probably interact with other brain systems which can mediate such effects. The present conceptual framework may be valuable in understanding the dynamic interplay of NAS DA neurochemistry and behavior, both normal and pathophysiological.",
"title": ""
},
{
"docid": "b2e1e52ac125951fac36f51ecc770ddc",
"text": "Pulse oximeters are central to the move toward wearable health monitoring devices and medical electronics either hosted by, e.g., smart phones or physically embedded in their design. This paper presents a small, low-cost pulse oximeter design appropriate for wearable and surface-based applications that also produces quality, unfiltered photo-plethysmograms (PPGs) ideal for emerging diagnostic algorithms. The design's “filter-free” embodiment, which employs only digital baseline subtraction as a signal compensation mechanism, distinguishes it from conventional pulse oximeters that incorporate filters for signal extraction and noise reduction. This results in high-fidelity PPGs with thousands of peak-to-peak digitization levels that are sampled at 240 Hz to avoid noise aliasing. Electronic feedback controls make these PPGs more resilient in the face of environmental changes (e.g., the device can operate in full room light), and data stream in real time across either a ZigBee wireless link or a wired USB connection to a host. On-board flash memory is available for store-and-forward applications. This sensor has demonstrated an ability to gather high-integrity data at fingertip, wrist, earlobe, palm, and temple locations from a group of 48 subjects (20 to 64 years old).",
"title": ""
},
{
"docid": "c940cfa3a74cce2aed59640975b4b80d",
"text": "A novel ultra-wideband bandpass filter (BPF) is presented using a back-to-back microstrip-to-coplanar waveguide (CPW) transition employed as the broadband balun structure in this letter. The proposed BPF is based on the electromagnetic coupling between open-circuited microstrip line and short-circuited CPW. The equivalent circuit of half of the filter is used to calculate the input impedance. The broadband microstip-to-CPW transition is designed at the center frequency of 6.85 GHz. The simulated and measured results are shown in this letter.",
"title": ""
},
{
"docid": "f032d36e081d2b5a4b0408b8f9b77954",
"text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.",
"title": ""
},
{
"docid": "00e60176eca7d86261c614196849a946",
"text": "This paper proposes a novel low-profile dual polarized antenna for 2.4 GHz application. The proposed antenna consists of a circular patch with four curved T-stubs and a differential feeding network. Due to the parasitic loading of the curved T-stubs, the bandwidth has been improved. Good impedance matching and dual-polarization with low cross polarization have been achieved within 2.4–2.5 GHz, which is sufficient for WLAN application. The total thickness of the antenna is only 0.031A,o, which is low-profile when compared with its counterparts.",
"title": ""
},
{
"docid": "b231f2c6b19d5c38b8aa99ec1b1e43da",
"text": "Many models of social network formation implicitly assume that network properties are static in steady-state. In contrast, actual social networks are highly dynamic: allegiances and collaborations expire and may or may not be renewed at a later date. Moreover, empirical studies show that human social networks are dynamic at the individual level but static at the global level: individuals’ degree rankings change considerably over time, whereas network-level metrics such as network diameter and clustering coefficient are relatively stable. There have been some attempts to explain these properties of empirical social networks using agent-based models in which agents play social dilemma games with their immediate neighbours, but can also manipulate their network connections to strategic advantage. However, such models cannot straightforwardly account for reciprocal behaviour based on reputation scores (“indirect reciprocity”), which is known to play an important role in many economic interactions. In order to account for indirect reciprocity, we model the network in a bottom-up fashion: the network emerges from the low-level interactions between agents. By so doing we are able to simultaneously account for the effect of both direct reciprocity (e.g. “tit-for-tat”) as well as indirect reciprocity (helping strangers in order to increase one’s reputation). This leads to a strategic equilibrium in the frequencies with which strategies are adopted in the population as a whole, but intermittent cycling over different strategies at the level of individual agents, which in turn gives rise to social networks which are dynamic at the individual level but stable at the network level.",
"title": ""
},
{
"docid": "ba8289f0730dae415c4ff3af57a41d4e",
"text": "This paper is Part 2 of a four-part series of our research on the development of a general framework for error analysis in measurement-based geographic information systems (MBGIS). In this paper, we discuss the problem of point-in-polygon analysis under randomness, i.e., with random measurement error (ME). It is well known that overlay is one of the most important operations in GIS, and point-in-polygon analysis is a basic class of overlay and query problems. Though it is a classic problem, it has, however, not been addressed appropriately. With ME in the location of the vertices of a polygon, the resulting random polygons may undergo complex changes, so that the point-in-polygon problem may become theoretically and practically ill-defined. That is, there is a possibility that we cannot answer whether a random point is inside a random polygon if the polygon is not simple and cannot form a region. For the point-in-triangle problem, however, such a case need not be considered since any triangle can always forms its an interior or region. To formulate the general point-in-polygon problem in a suitable way, a conditional probability mechanism is first introduced in order to accurately characterize the nature of the problem and establish the basis for further analysis. For the point-in-triangle problem, four quadratic forms in the joint coordinate vectors of a point and the vertices of the triangle are constructed. The probability model for the point-in-triangle problem is then established by the identification of signs of these quadratic form variables. Our basic idea for solving a general point-in-polygon (concave or convex) problem is to convert it into several point-in-triangle A general framework for error analysis in measurement-based GIS ------Part 2 2 problems under a certain condition. By solving each point-in-triangle problem and summing the solutionsm up, the probability model for a general point-in-polygon analysis is constructed. The simplicity of the algebra-based approach is that from using these quadratic forms, we can circumvent the complex geometrical relations between a random point and a random polygon (convex or concave) that one has to deal with in any geometric methods when the probability is computed. The theoretical arguments are substantiated by simulation experiments.",
"title": ""
},
{
"docid": "ff3c50ecbd71b7c2ce6e4207dae73b3b",
"text": "Information has emerged as an agent of integration and the enabler of new competitiveness for today’s enterprise in the global marketplace. However, has the paradigm of strategic planning changed sufficiently to support the new role of information systems and technology? We reviewed the literature for commonly used or representative information planning methodologies and found that a new approach is needed. There are six methodologies reviewed in this paper. They all tend to regard planning as a separate stage which does not connect structurally and directly to the information systems development. An integration of planning with development and management through enterprise information resources which capture and characterize the enterprise will shorten the response cycle and even allow for economic evaluation of information system investment.",
"title": ""
},
{
"docid": "bb2dfa72aab4eddb6d86427ea6684162",
"text": "While inductive and deductive reasoning are considered distinct logical and psychological processes, little is known about their respective neural basis. To address this issue we scanned 16 subjects with fMRI, using an event-related design, while they engaged in inductive and deductive reasoning tasks. Both types of reasoning were characterized by activation of left lateral prefrontal and bilateral dorsal frontal, parietal, and occipital cortices. Neural responses unique to each type of reasoning determined from the Reasoning Type (deduction and induction) by Task (reasoning and baseline) interaction indicated greater involvement of left inferior frontal gyrus (BA 44) in deduction than induction, while left dorsolateral (BA 8/9) prefrontal gyrus showed greater activity during induction than deduction. This pattern suggests a dissociation within prefrontal cortex for deductive and inductive reasoning.",
"title": ""
},
{
"docid": "68e742741d513b92cc14ab96f69a1393",
"text": "In the industry of integrated circuits, defect patterns shown on a wafer map contain crucial information for quality engineers to find the cause of defect to increase yield. This paper proposes a method for wafer defect pattern recognition which could recognize more than one defect patterns based on Ordering Point to Identify the Cluster Structure(OPTICS) and Support Vector Machine(SVM). The effectiveness of the proposed method has been verified from following three aspects from a real-world data set of wafer maps(WM-811K): salient defect pattern recognition accuracy up to 94.3% and the accuracy of some types has an obvious improvement, multi-patterns recognition accuracy(82.0%), and computation time has a significantly reduction.",
"title": ""
},
{
"docid": "9def5ba1b4b262b8eb71123023c00e36",
"text": "OBJECTIVE\nThe primary objective of this study was to compare clinically and radiographically the efficacy of autologous platelet rich fibrin (PRF) and autogenous bone graft (ABG) obtained using bone scrapper in the treatment of intrabony periodontal defects.\n\n\nMATERIALS AND METHODS\nThirty-eight intrabony defects (IBDs) were treated with either open flap debridement (OFD) with PRF or OFD with ABG. Clinical parameters were recorded at baseline and 6 months postoperatively. The defect-fill and defect resolution at baseline and 6 months were calculated radiographically (intraoral periapical radiographs [IOPA] and orthopantomogram [OPG]).\n\n\nRESULTS\nSignificant probing pocket depth (PPD) reduction, clinical attachment level (CAL) gain, defect fill and defect resolution at both PRF and ABG treated sites with OFD was observed. However, inter-group comparison was non-significant (P > 0.05). The bivariate correlation results revealed that any of the two radiographic techniques (IOPA and OPG) can be used for analysis of the regenerative therapy in IBDs.\n\n\nCONCLUSION\nThe use of either PRF or ABG were effective in the treatment of three wall IBDs with an uneventful healing of the sites.",
"title": ""
},
{
"docid": "1153287a3a5cde9f6bbacb83dffecdf3",
"text": "This communication deals with the design of a <inline-formula> <tex-math notation=\"LaTeX\">$16\\times 16$ </tex-math></inline-formula> slot array antenna fed by inverted microstrip gap waveguide (IMGW). The whole structure designed in this communication consists of radiating slots, a groove gap cavity layer, a distribution feeding network, and a transition from standard WR-15 waveguide to the IMGW. First, a <inline-formula> <tex-math notation=\"LaTeX\">$2\\times 2$ </tex-math></inline-formula> cavity-backed slot subarray is designed with periodic boundary condition to achieve good performances of radiation pattern and directivity. Then, a complete IMGW feeding network with a transition from WR-15 rectangular waveguide to the IMGW has been realized to excite the radiating slots. The complete antenna array is designed at 60-GHz frequency band and fabricated using Electrical Discharging Machining Technology. The measurements show that the antenna has a 16.95% bandwidth covering 54–64-GHz frequency range. The measured gain of the antenna is more than 28 dBi with the efficiency higher than 40% covering 54–64-GHz frequency range.",
"title": ""
},
{
"docid": "5ba72505e19ded19685f43559868bfdf",
"text": "In this paper, we present an optimally-modi#ed log-spectral amplitude (OM-LSA) speech estimator and a minima controlled recursive averaging (MCRA) noise estimation approach for robust speech enhancement. The spectral gain function, which minimizes the mean-square error of the log-spectra, is obtained as a weighted geometric mean of the hypothetical gains associated with the speech presence uncertainty. The noise estimate is given by averaging past spectral power values, using a smoothing parameter that is adjusted by the speech presence probability in subbands. We introduce two distinct speech presence probability functions, one for estimating the speech and one for controlling the adaptation of the noise spectrum. The former is based on the time–frequency distribution of the a priori signal-to-noise ratio. The latter is determined by the ratio between the local energy of the noisy signal and its minimum within a speci6ed time window. Objective and subjective evaluation under various environmental conditions con6rm the superiority of the OM-LSA and MCRA estimators. Excellent noise suppression is achieved, while retaining weak speech components and avoiding the musical residual noise phenomena. ? 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7df97a9d3ae19fce1c86322c1f5ac929",
"text": "This study examined the effects of background music on test performance. In a repeated-measures design 30 undergraduates completed two cognitive tests, one in silence and the other with background music. Analysis suggested that music facilitated cognitive performance compared with the control condition of no music: more questions were completed and more answers were correct. There was no difference in heart rate under the two conditions. The improved performance under the music condition might be directly related to the type of music used.",
"title": ""
},
{
"docid": "fa69a8a67ab695fd74e3bfc25206c94c",
"text": "Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.",
"title": ""
},
{
"docid": "79e565a569d72836e089067e35b5844c",
"text": "The author has tried to show, in detail and with precision. just how the global regularities with which biology deals can be envisaged as structures within a many-dimensioned space. He not only has shown how such ideas as chreods, the epigenetic landscape, and switching points, which previously were expressed only in the unsophisticated language of biology, can be formulated more adequately in terms such as vector fields, attractors, catastrophes. and the like; going much further than this. he develops many highly original ideas, both strictly mathematical ones within the field of topology, and applications of these to very many aspects of biology and of other sciences. It would be quite wrong to give the impression that Thorn's book is exclusively devoted to biology, The subjects mentioned in his title, Structural Stability and Morphoge.esis, have a much wider reference: and he relates his topological system of thought to physical and indeed to general philosophical problems. In biology, Thorn not only uses topological modes of thought to provide formal definitions of concepts and a logical framework by which they can be related; he also makes a bold attempt at a direct comparison between topological structures within four-dimensional space-time, such as catastrophe hypersurfaces, and the physical structures found in developing embryos. The basic importance of this book is the introduction, in a massive and thorough way, of topological thinking as a framework for theoretical biology.",
"title": ""
},
{
"docid": "08025e6ed1ee71596bdc087bfd646eac",
"text": "A method is presented for computing an orthonormal set of eigenvectors for the discrete Fourier transform (DFT). The technique is based on a detailed analysis of the eigenstructure of a special matrix which commutes with the DFT. It is also shown how fractional powers of the DFT can be efficiently computed, and possible applications to multiplexing and transform coding are suggested. T",
"title": ""
},
{
"docid": "3b78223f5d11a56dc89a472daf23ca49",
"text": "Shadow maps provide a fast and convenient method of identifying shadows in scenes but can introduce aliasing. This paper introduces the Adaptive Shadow Map (ASM) as a solution to this problem. An ASM removes aliasing by resolving pixel size mismatches between the eye view and the light source view. It achieves this goal by storing the light source view (i.e., the shadow map for the light source) as a hierarchical grid structure as opposed to the conventional flat structure. As pixels are transformed from the eye view to the light source view, the ASM is refined to create higher-resolution pieces of the shadow map when needed. This is done by evaluating the contributions of shadow map pixels to the overall image quality. The improvement process is view-driven, progressive, and confined to a user-specifiable memory footprint. We show that ASMs enable dramatic improvements in shadow quality while maintaining interactive rates.",
"title": ""
},
{
"docid": "0222814440107fe89c13a790a6a3833e",
"text": "This paper presents a third method of generation and detection of a single-sideband signal. The method is basically different from either the conventional filter or phasing method in that no sharp cutoff filters or wide-band 90° phase-difference networks are needed. This system is especially suited to keeping the signal energy confined to the desired bandwidth. Any unwanted sideband occupies the same band as the desired sideband, and the unwanted sideband in the usual sense is not present.",
"title": ""
}
] |
scidocsrr
|
5877646674f3165c41648b30ffb64a01
|
Social capital : Prospects for a new concept
|
[
{
"docid": "3130e666076d119983ac77c5d77d0aed",
"text": "of Ph.D. dissertation, University of Haifa, Israel.",
"title": ""
}
] |
[
{
"docid": "7daf5ad71bda51eacc68f0a1482c3e7e",
"text": "Nearly every modern mobile device includes two cameras. With advances in technology the resolution of these sensors has constantly increased. While this development provides great convenience for users, for example with video-telephony or as dedicated camera replacement, the security implications of including high resolution cameras on such devices has yet to be considered in greater detail. With this paper we demonstrate that an attacker may abuse the cameras in modern smartphones to extract valuable information from a victim. First, we consider exploiting a front-facing camera to capture a user’s keystrokes. By observing facial reflections, it is possible to capture user input with the camera. Subsequently, individual keystrokes can be extracted from the images acquired with the camera. Furthermore, we demonstrate that these cameras can be used by an attacker to extract and forge the fingerprints of a victim. This enables an attacker to perform a wide range of malicious actions, including authentication bypass on modern biometric systems and falsely implicating a person by planting fingerprints in a crime scene. Finally, we introduce several mitigation strategies for the identified threats.",
"title": ""
},
{
"docid": "d4e22e73965bcd9fdb1628711d6beb44",
"text": "This project is designed to measure heart beat (pulse count), by using embedded technology. In this project simultaneously it can measure and monitor the patient’s condition. This project describes the design of a simple, low-cost controller based wireless patient monitoring system. Heart rate of the patient is measured from the thumb finger using IRD (Infra Red Device sensor).Pulse counting sensor is arranged to check whether the heart rate is normal or not. So that a SMS is sent to the mobile number using GSM module interfaced to the controller in case of abnormal condition. A buzzer alert is also given. The heart rate can be measured by monitoring one's pulse using specialized medical devices such as an electrocardiograph (ECG), portable device e.g. The patient heart beat monitoring systems is one of the major wrist strap watch, or any other commercial heart rate monitors which normally consisting of a chest strap with electrodes. Despite of its accuracy, somehow it is costly, involve many clinical settings and patient must be attended by medical experts for continuous monitoring.",
"title": ""
},
{
"docid": "9643afa619093422114a1449b1bf6b76",
"text": "In this paper we describe the adaptation of a supervised classification system that was originally developed to detect sentiment on Twitter texts written in English. The Columbia University team adapted this system to participate in Task 1 of the 4th edition of the experimental evaluation workshop for sentiment analysis focused on the Spanish language (TASS 2015). The task consists of determining the global polarity of a group of messages written in Spanish using the social media platform Twitter.",
"title": ""
},
{
"docid": "17833f9cf4eec06dbc4d7954b6cc6f3f",
"text": "Automated vehicles rely on the accurate and robust detection of the drivable area, often classified into free space, road area and lane information. Most current approaches use monocular or stereo cameras to detect these. However, LiDAR sensors are becoming more common and offer unique properties for road area detection such as precision and robustness to weather conditions. We therefore propose two approaches for a pixel-wise semantic binary segmentation of the road area based on a modified U-Net Fully Convolutional Network (FCN) architecture. The first approach UView-Cam employs a single camera image, whereas the second approach UGrid-Fused incorporates a early fusion of LiDAR and camera data into a multi-dimensional occupation grid representation as FCN input. The fusion of camera and LiDAR allows for efficient and robust leverage of individual sensor properties in a single FCN. For the training of UView-Cam, multiple publicly available datasets of street environments are used, while the UGrid-Fused is trained with the KITTI dataset. In the KITTI Road/Lane Detection benchmark, the proposed networks reach a MaxF score of 94.23% and 93.81% respectively. Both approaches achieve realtime performance with a detection rate of about 10 Hz.",
"title": ""
},
{
"docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
{
"docid": "0db28b5ec56259c8f92f6cc04d4c2601",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "831836deb75aacb54513004daa92e1bf",
"text": "Jean Watson introduced The Theory of Human Caring over thirty years ago to the nursing profession. In the theory it is stated that caring is the essence of nursing and that professional nurses have an obligation to provide the best environment for healing to take place. The theory’s carative factors outlines principles and ideas that should be used by the professional nurse to create the best environment for healing of the patient and of the nurse. This paper will describe and critique Jean Watson’s Theory of Human Caring and discuss how this model has influenced nursing practice. REFLECTIONS ON JEAN WATSON'S THEORY OF HUMAN CARING 3 Reflections on Jean Watson's Theory of Human Caring Florence Nightingale helped define the role of the nurse over one hundred and fifty years ago. Even so, nursing has struggled to find an identity apart from medicine. For years nursing theorists have examined how nursing is unique from medicine. While it was obvious that nursing was a different art than medicine, there was not any scholarly work to illustrate the difference. During the 1950’s nursing began building a body of knowledge, which interpreted and conceptualized the intricacies of nursing. Over the next several decades, nurse theorists rapidly grew the discipline’s foundation. One of the concepts that emerged was nursing as caring. Several theorists have identified caring as being central to nursing; however, Watson’s Theory of Human Caring offers a unique perspective. The theory blends the beliefs and ideas from Eastern and Western cultures to create a spiritual philosophy that can be used throughout nursing practice. This paper will describe and critique Jean Watson’s Theory of Human Caring and discuss how this model has influenced nursing practice. Introduction to the Theory The Theory of Human Caring evolved from Jean Watson’s own desire to develop a deeper understanding of the meaning of humanity and life. She was also greatly influenced by her background in philosophy, psychology and nursing science. Watson’s first book Nursing: The Philosophy and Science of Caring (1979) was developed to bring a “new meaning and dignity” to nursing care (Watson, 2008). The first book introduced carative factors, which are the foundation of Watson’s Theory of Human Caring. The carative factors offered a holistic perspective to caring for a patient, juxtaposed to the reductionist, biophysical model that was prevalent at the time. Watson believed that without incorporating the carative factors, a nurse REFLECTIONS ON JEAN WATSON'S THEORY OF HUMAN CARING 4 was only performing tasks when treating a patient and not offering professional nursing care (Watson, 2008). In Watson’s second book, Nursing: Human Science and Human Care, A Theory of Nursing (1985), she discusses the philosophical and spiritual components of the Theory of Human Caring, as well as expands upon the definition and implications of the transpersonal moment. The second book redefines caring as a combination of scientific actions, consciousness and intentionality, as well as defines the transcendental phenomenology of a transpersonal caring occasion and expands upon the idea of human-to-human connection. Watson’s third book, Postmodern Nursing and Beyond (1999), focuses on the evolution of the consciousness of the clinician. The third book reinforces the ideas of the first two books and further evolves several concepts to include the spiritual realm, the energetic realm, the interconnectedness to all things and the higher power. 
The philosophy behind each book and the Theory of Human Caring is that all human beings are connected to each other and to a divine spirit or higher power. Furthermore, each interaction between human beings, but specifically between nurses and patients, should be entered into with the intention of connecting with the patient’s spirit or higher source. Each moment or each act can and should not only facilitate healing in the patient and the nurse, but also transcend both space and time. The components of Watson’s theories include the 10 carative factors, the caritas process, the transpersonal caring relationship, caring moments and caring/healing modalities. Carative factors are the essential characteristics needed by the professional nurse to establish a therapeutic relationship and promote healing. Carative factors are the core of Watson’s philosophy and they are (i) formation of a humanistic-altruistic system of values, (ii) instillation of faith-hope, (iii) cultivation of sensitivity to one’s self and to others, (iv) development of a helping-trusting human caring relationship, (v) promotion and acceptance of the expression of positive and negative feelings, (vi) systematic use of a creative problem solving and caring process, (vii) promotion of transpersonal teaching-learning, (viii) provision for a supportive, protective, and/or corrective mental, physical, societal and spiritual environment, (ix) assistance with gratification of human needs and (x) allowance for existential-phenomenological-spiritual forces. Carative factors are intended to provide a foundation for the discipline of nursing that is developed from understanding and compassion. Watson’s caritas processes are the expansion of the original carative factors and are reflective of Watson’s own personal evolution. The caritas processes provide the tenets for a professional approach to caring, a means by which to practice caring in a spiritual and loving fashion. The transpersonal caring relationship is a relationship that goes beyond one’s self and creates a change in the energetic environment of the nurse and the patient. A transpersonal caring relationship allows for a relationship between the souls of the individuals and because of this authentic relationship, optimal caring and healing can take place (Watson, 1985). In the transpersonal relationship the caregiver is aware of his/her intention and performs care that is emanating from the heart. When intentionality is focused and delivered from the heart, unseen energetic fields can change and promote an environment for healing. When a nurse is more conscious of his or her self and surroundings, he or she acts from a place of love with each caring moment. Caring moments are any moments in which a nurse has an interaction with a patient or family and is using the carative factors or the caritas process. In order for a caring moment to occur the participation of the nurse and the patient is required. Practice based on the carative factors presents an opportunity for both the nurse and patient to engage in a transpersonal caring moment that benefits the mind, body and soul of each person. The caring/healing modalities are practices that enhance the ability of the care provider to engage in a transpersonal relationship and caring moments. Caring/healing exercises can be as simple as centering, being attentive to touch or the communication of specific knowledge. 
The goal of using Watson’s principles in practice is to enhance the life and experience of the nurse and of the patient. Description of Theory Purpose The Theory of Human Caring was developed based on Watson’s desire to reestablish holistic practice in nursing care and move away from the cold and disconnected scientific model while infusing feeling and caring back into nursing practice (Watson, 2008). The purpose of the theory was to provide a philosophical-ethical foundation from which the nurse could provide care. The proposed benefit of this theory for both the nurse and the patient is that when each person reveals his or her authentic self and engages in interactions with another being, the energetic field around both of them will change and enhance the healing environment. The theory’s purpose is quite broad, promoting healing and oneness with the universe through caring. The positive impact of these practices is phenomenal and the beauty of the theory is that the caritas processes can be used to enhance any practice. When applied to nursing practice, the theory reestablishes Florence Nightingale’s vision that nursing is a spiritual calling. The deeper message within the theory is that being/relating to others from a place of love can transcend the planes and energetic fields of the universe and promote healing to one’s self and to",
"title": ""
},
{
"docid": "92c91a8e9e5eec86f36d790dec8020e7",
"text": "Aspect-based opinion mining, which aims to extract aspects and their corresponding ratings from customers reviews, provides very useful information for customers to make purchase decisions. In the past few years several probabilistic graphical models have been proposed to address this problem, most of them based on Latent Dirichlet Allocation (LDA). While these models have a lot in common, there are some characteristics that distinguish them from each other. These fundamental differences correspond to major decisions that have been made in the design of the LDA models. While research papers typically claim that a new model outperforms the existing ones, there is normally no \"one-size-fits-all\" model. In this paper, we present a set of design guidelines for aspect-based opinion mining by discussing a series of increasingly sophisticated LDA models. We argue that these models represent the essence of the major published methods and allow us to distinguish the impact of various design decisions. We conduct extensive experiments on a very large real life dataset from Epinions.com (500K reviews) and compare the performance of different models in terms of the likelihood of the held-out test set and in terms of the accuracy of aspect identification and rating prediction.",
"title": ""
},
{
"docid": "0dd1b31d778d30644ce405032729ad7a",
"text": "In order to save the cost and energy for PV system testing, a high efficiency solar array simulator (SAS) implemented by an LLC resonant DC/DC converter is proposed. This converter has zero voltage switching (ZVS) operation of the primary switches and zero current switching (ZCS) operation of the rectifier diodes. By frequency modulation control, the output impedance of an LLC converter can be regulated from zero to infinite without shunt or series resistor; hence, the efficiency of the proposed SAS can be significantly increased. According to the provided operation principles and design considerations of an LLC converter, a prototype is implemented to demonstrate the feasibility of the proposed SAS.",
"title": ""
},
{
"docid": "06a91d87398ef65bbfa95ab860972fbe",
"text": "A novel variable reluctance (VR) resolver with nonoverlapping tooth-coil windings is proposed in this paper. It significantly simplifies the manufacturing process of multilayer windings in conventional products. Finite element (FE) analysis is used to illustrate the basic operating principle, followed by analytical derivation of main parameters and optimization of major dimensions, including air-gap length and slot opening width. Based on winding distributions and FE results, it is shown that identical stator and winding can be employed for a resolver with three different numbers of rotor poles. Further, other stator slot/rotor pole combinations based on the nonoverlapping tooth-coil windings are generalized. In addition, the influence of eccentricity and end-winding leakage on the proposed topology is investigated. Finally, a prototype is fabricated and tested to verify the analysis, including main parameters and electrical angle error.",
"title": ""
},
{
"docid": "e1a18dfd191c0708565481b2c9decd6e",
"text": "The emergence of co-processors such as Intel Many Integrated Cores (MICs) is changing the landscape of supercomputing. The MIC is a memory constrained environment and its processors also operate at slower clock rates. Furthermore, the communication characteristics between MIC processes are also different compared to communication between host processes. Communication libraries that do not consider these architectural subtleties cannot deliver good communication performance. The performance of MPI collective operations strongly affect the performance of parallel applications. Owing to the challenges introduced by the emerging heterogeneous systems, it is critical to fundamentally re-design collective algorithms to ensure that applications can fully leverage the MIC architecture. In this paper, we propose a generic framework to optimize the performance of important collective operations, such as, MPI Bcast, MPI Reduce and MPI Allreduce, on Intel MIC clusters. We also present a detailed analysis of the compute phases in reduce operations for MIC clusters. To the best of our knowledge, this is the first paper to propose novel designs to improve the performance of collectives on MIC clusters. Our designs improve the latency of the MPI Bcast operation with 4,864 MPI processes by up to 76%. We also observe up to 52.4% improvements in the communication latency of the MPI Allreduce operation with 2K MPI processes on heterogeneous MIC clusters. Our designs also improve the execution time of the WindJammer application by up to 16%.",
"title": ""
},
{
"docid": "503c9c4d0d8f94d3e7a9ea8ee496e08b",
"text": "Memories for context become less specific with time resulting in animals generalizing fear from training contexts to novel contexts. Though much attention has been given to the neural structures that underlie the long-term consolidation of a context fear memory, very little is known about the mechanisms responsible for the increase in fear generalization that occurs as the memory ages. Here, we examine the neural pattern of activation underlying the expression of a generalized context fear memory in male C57BL/6J mice. Animals were context fear conditioned and tested for fear in either the training context or a novel context at recent and remote time points. Animals were sacrificed and fluorescent in situ hybridization was performed to assay neural activation. Our results demonstrate activity of the prelimbic, infralimbic, and anterior cingulate (ACC) cortices as well as the ventral hippocampus (vHPC) underlie expression of a generalized fear memory. To verify the involvement of the ACC and vHPC in the expression of a generalized fear memory, animals were context fear conditioned and infused with 4% lidocaine into the ACC, dHPC, or vHPC prior to retrieval to temporarily inactivate these structures. The results demonstrate that activity of the ACC and vHPC is required for the expression of a generalized fear memory, as inactivation of these regions returned the memory to a contextually precise form. Current theories of time-dependent generalization of contextual memories do not predict involvement of the vHPC. Our data suggest a novel role of this region in generalized memory, which should be incorporated into current theories of time-dependent memory generalization. We also show that the dorsal hippocampus plays a prolonged role in contextually precise memories. Our findings suggest a possible interaction between the ACC and vHPC controls the expression of fear generalization.",
"title": ""
},
{
"docid": "ccd054dc5e5ee5c6852b08fc777ea9a2",
"text": "The Retinex Theory first introduced by Edwin Land forty years ago has been widely used for a range of applications. It was first introduced as a model of our own visual processing but has since been used to perform a range of image processing tasks including illuminant correction, dynamic range compression, and gamut mapping. In this paper we show how the theory can be extended to perform yet another image processing task: that of removing shadows from images. Our method is founded on a simple modification to the original, path based retinex computation such that we incorporate information about the location of shadow edges in an image. We demonstrate that when the location of shadow edges is known the algorithm is able to remove shadows effectively. We also set forth a method for the automatic location of shadow edges which makes use of a 1-d illumination invariant image proposed in previous work [1]. In this case the location of shadow edges is imperfect but we show that even so, the algorithm does a good job of removing the shadows.",
"title": ""
},
{
"docid": "9d75520f138bcf7c529488f29d01efbb",
"text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.",
"title": ""
},
{
"docid": "839546257ee751a09334e2a7bd2fd18d",
"text": "BACKGROUND & AIMS\nMalnutrition is frequently observed in chronic and severe diseases and associated with impaired outcome. In Germany general data on prevalence and impact of hospital malnutrition are missing.\n\n\nMETHODS\nNutritional state was assessed by subjective global assessment (SGA) and by anthropometric measurements in 1,886 consecutively admitted patients in 13 hospitals (n=1,073, university hospitals; n=813, community or teaching hospitals). Risk factors for malnutrition and the impact of nutritional status on length of hospital stay were analyzed.\n\n\nRESULTS\nMalnutrition was diagnosed in 27.4% of patients according to SGA. A low arm muscle area and arm fat area were observed in 11.3% and 17.1%, respectively. Forty-three % of patients 70 years old were malnourished compared to only 7.8% of patients <30 years. The highest prevalence of malnutrition was observed in geriatric (56.2%), oncology (37.6%), and gastroenterology (32.6%) departments. Multivariate analysis revealed three independent risk factors: higher age, polypharmacy, and malignant disease (all P<0.01). Malnutrition was associated with an 43% increase of hospital stay (P<0.001).\n\n\nCONCLUSIONS\nIn German hospitals every fourth patient is malnourished. Malnutrition is associated with increased length of hospital stay. Higher age, malignant disease and major comorbidity were found to be the main contributors to malnutrition. Adequate nutritional support should be initiated in order to optimize the clinical outcome of these patients.",
"title": ""
},
{
"docid": "fe6e3ebd013c0a991f2bbff2c035111a",
"text": "Topic modeling with a tree-based prior has been used for a variety of applications because it can encode correlations between words that traditional topic modeling cannot. However, its expressive power comes at the cost of more complicated inference. We extend the SPARSELDA (Yao et al., 2009) inference scheme for latent Dirichlet allocation (LDA) to tree-based topic models. This sampling scheme computes the exact conditional distribution for Gibbs sampling much more quickly than enumerating all possible latent variable assignments. We further improve performance by iteratively refining the sampling distribution only when needed. Experiments show that the proposed techniques dramatically improve the computation time.",
"title": ""
},
{
"docid": "61c4146ac8b55167746d3f2b9c8b64e8",
"text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.",
"title": ""
},
{
"docid": "9bcf4fcb795ab4cfe4e9d2a447179feb",
"text": "In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the “inputs” into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in detect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors which need to be identified.",
"title": ""
},
{
"docid": "1c177a7fdbd15e04a6b122a284a9014a",
"text": "Malicious software installed on infected computers is a fundamental component of online crime. Malware development thus plays an essential role in the underground economy of cyber-crime. Malware authors regularly update their software to defeat defenses or to support new or improved criminal business models. A large body of research has focused on detecting malware, defending against it and identifying its functionality. In addition to these goals, however, the analysis of malware can provide a glimpse into the software development industry that develops malicious code.\n In this work, we present techniques to observe the evolution of a malware family over time. First, we develop techniques to compare versions of malicious code and quantify their differences. Furthermore, we use behavior observed from dynamic analysis to assign semantics to binary code and to identify functional components within a malware binary. By combining these techniques, we are able to monitor the evolution of a malware's functional components. We implement these techniques in a system we call Beagle, and apply it to the observation of 16 malware strains over several months. The results of these experiments provide insight into the effort involved in updating malware code, and show that Beagle can identify changes to individual malware components.",
"title": ""
},
{
"docid": "bb9f86e800e3f00bf7b34be85d846ff0",
"text": "This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
}
] |
scidocsrr
|
5012100e02654b61268f4e93e6a67521
|
Connecting Generative Adversarial Networks and Actor-Critic Methods
|
[
{
"docid": "a33cf416cf48f67cd0a91bf3a385d303",
"text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f -divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.",
"title": ""
}
] |
[
{
"docid": "da3ee32f1818f94ce30d300a80f08fff",
"text": "This paper presents a design of an Automatic Baby Cradle System which gives a reliable and efficient baby monitoring system that can play a significant role in providing better infant care. This system monitor parameters such as baby cry, environment temperature, moisture condition, and using cloud this information is accessed by parents to initiate the proper control actions. The system architecture consists of sensors for monitoring vital parameters, dc motor for cradle movement, cloud where data is stored and a sound buzzer all controlled by a single Arduino Mega microcontroller core.",
"title": ""
},
{
"docid": "291628b7e68f897bf23ca1ad1c0fdcfd",
"text": "Device-free Passive (DfP) human detection acts as a key enabler for emerging location-based services such as smart space, human-computer interaction, and asset security. A primary concern in devising scenario-tailored detecting systems is coverage of their monitoring units. While disk-like coverage facilitates topology control, simplifies deployment analysis, and is crucial for proximity-based applications, conventional monitoring units demonstrate directional coverage due to the underlying transmitter-receiver link architecture. To achieve omnidirectional coverage under such link-centric architecture, we propose the concept of omnidirectional passive human detection. The rationale is to exploit the rich multipath effect to blur the directional coverage. We harness PHY layer features to robustly capture the fine-grained multipath characteristics and virtually tune the shape of the coverage of the monitoring unit, which is previously prohibited with mere MAC layer RSSI. We design a fingerprinting scheme and a threshold-based scheme with off-the-shelf WiFi infrastructure and evaluate both schemes in typical clustered indoor scenarios. Experimental results demonstrate an average false positive of 8 percent and an average false negative of 7 percent for fingerprinting in detecting human presence in 4 directions. And both average false positive and false negative remain around 10 percent even with threshold-based methods.",
"title": ""
},
{
"docid": "e7bfcc9cf345ae1570f7dfddb8cf2444",
"text": "Motivated by the need to provide services to alleviate range anxiety of electric vehicles, we consider the problem of balancing charging demand across a network of charging stations. Our objective is to reduce the potential for excessively long queues to build up at some charging stations, although other charging stations are underutilized. A stochastic balancing algorithm is presented to achieve these goals. A further feature of this algorithm is that it is fully decentralized and facilitates a plug-and-play type of behavior. Using our system, the charging stations can join and leave the network without any changes to, or communication with, a centralized infrastructure. Analysis and simulations are presented to illustrate the efficacy of our algorithm.",
"title": ""
},
{
"docid": "d615916992e4b8a9b6f3040adace7b44",
"text": "The paper presents a new design of dual-mode dielectric-loaded rectangular cavity filters. The response of the filter is mainly controlled by the location and orientation of the coupling apertures with no intra-cavity coupling. Each dual-mode dielectric-loaded cavity generates and controls one transmission zero which can be placed on either side of the passband. Example filters which demonstrate the soundness of the design technique are presented.",
"title": ""
},
{
"docid": "ec755cd186c5d9bb6937d5170161e010",
"text": "Highly sophisticated artificial neural networks have achieved unprecedented performance across a variety of complex real-world problems over the past years, driven by the ability to detect significant patterns autonomously. Modern electronic stock markets produce large volumes of data, which are very suitable for use with these algorithms. This research explores new scientific ground by designing and evaluating a convolutional neural network in predicting future financial outcomes. A visually inspired transformation process translates high-frequency market microstructure data from the London Stock Exchange into four market-event based input channels, which are used to train six deep networks. Primary results indicate that con-volutional networks behave reasonably well on this task and extract interesting microstructure patterns, which are in line with previous theoretical findings. Furthermore, it demonstrates a new approach using modern deep-learning techniques for exploiting and analysing market microstructure behaviour.",
"title": ""
},
{
"docid": "4661b378eda6cd44c95c40ebf06b066b",
"text": "Speech signal degradation in real environments mainly results from room reverberation and concurrent noise. While human listening is robust in complex auditory scenes, current speech segregation algorithms do not perform well in noisy and reverberant environments. We treat the binaural segregation problem as binary classification, and employ deep neural networks (DNNs) for the classification task. The binaural features of the interaural time difference and interaural level difference are used as the main auditory features for classification. The monaural feature of gammatone frequency cepstral coefficients is also used to improve classification performance, especially when interference and target speech are collocated or very close to one another. We systematically examine DNN generalization to untrained spatial configurations. Evaluations and comparisons show that DNN-based binaural classification produces superior segregation performance in a variety of multisource and reverberant conditions.",
"title": ""
},
{
"docid": "f56d5487c5f59d9b951841b993cbec07",
"text": "We present Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or drag and in-air 'pigtail' to copy text to the clipboard. Through an observational study, we devised a basic taxonomy of Air+Touch interactions, based on whether the in-air component occurs before, between or after touches. To illustrate the potential of our approach, we built four applications that showcase seven exemplar Air+Touch interactions we created.",
"title": ""
},
{
"docid": "a7b92390c6ecc3c2eef986a970d7f2e6",
"text": "Gingival recession related to periodontal disease or developmental problems can result in root sensitivity, root caries, and esthetically unacceptable root exposures. Consequently, root restorations are performed that often complicate, rather than resolve, the problems created by exposed roots. This article presents a predictable procedure for root coverage on areas of wide denudation in the maxilla and the mandible.",
"title": ""
},
{
"docid": "acf4f5fa5ae091b5e72869213deb643e",
"text": "A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.",
"title": ""
},
{
"docid": "ad091e4f66adb26d36abfc40377ee6ab",
"text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.",
"title": ""
},
{
"docid": "62e7bac45a035733b539d0853360c2c8",
"text": "192 words] Purpose: To develop a computer based method for the automated assessment of image quality in the context of diabetic retinopathy (DR) to guide the photographer. Methods: A deep learning framework was trained to grade the images automatically. A large representative set of 7000 color fundus images were used for the experiment which were obtained from the EyePACS (http://www.eyepacs.com/) that were made available by the California Healthcare Foundation. Three retinal image analysis experts were employed to categorize these images into ‘accept’ and ‘reject’ classes based on the precise definition of image quality in the context of DR. A deep learning framework was trained using 3428 images. Results: A total of 3572 images were used for the evaluation of the proposed method. The method shows an accuracy of 100% to successfully categorise ‘accept’ and ‘reject’ images. Conclusion: Image quality is an essential prerequisite for the grading of DR. In this paper we have proposed a deep learning based automated image quality assessment method in the context of DR. The",
"title": ""
},
{
"docid": "83ed2dfe4456bc3cc8052747e7df7bfc",
"text": "Dietary restriction has been shown to have several health benefits including increased insulin sensitivity, stress resistance, reduced morbidity, and increased life span. The mechanism remains unknown, but the need for a long-term reduction in caloric intake to achieve these benefits has been assumed. We report that when C57BL6 mice are maintained on an intermittent fasting (alternate-day fasting) dietary-restriction regimen their overall food intake is not decreased and their body weight is maintained. Nevertheless, intermittent fasting resulted in beneficial effects that met or exceeded those of caloric restriction including reduced serum glucose and insulin levels and increased resistance of neurons in the brain to excitotoxic stress. Intermittent fasting therefore has beneficial effects on glucose regulation and neuronal resistance to injury in these mice that are independent of caloric intake.",
"title": ""
},
{
"docid": "3c29c0a3e8ec6292f05c7907436b5e9a",
"text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.",
"title": ""
},
{
"docid": "8873369cf69e5de3b97875404f6aea64",
"text": "BACKGROUND\nTobacco smoke exposure (TSE) is a worldwide health problem and it is considered a risk factor for pregnant women's and children's health, particularly for respiratory morbidity during the first year of life. Few significant birth cohort studies on the effect of prenatal TSE via passive and active maternal smoking on the development of severe bronchiolitis in early childhood have been carried out worldwide.\n\n\nMETHODS\nFrom November 2009 to December 2012, newborns born at ≥ 33 weeks of gestational age (wGA) were recruited in a longitudinal multi-center cohort study in Italy to investigate the effects of prenatal and postnatal TSE, among other risk factors, on bronchiolitis hospitalization and/or death during the first year of life.\n\n\nRESULTS\nTwo thousand two hundred ten newborns enrolled at birth were followed-up during their first year of life. Of these, 120 (5.4%) were hospitalized for bronchiolitis. No enrolled infants died during the study period. Prenatal passive TSE and maternal active smoking of more than 15 cigarettes/daily are associated to a significant increase of the risk of offspring children hospitalization for bronchiolitis, with an adjHR of 3.5 (CI 1.5-8.1) and of 1.7 (CI 1.1-2.6) respectively.\n\n\nCONCLUSIONS\nThese results confirm the detrimental effects of passive TSE and active heavy smoke during pregnancy for infants' respiratory health, since the exposure significantly increases the risk of hospitalization for bronchiolitis in the first year of life.",
"title": ""
},
{
"docid": "177db8a6f89528c1e822f52395a34468",
"text": "Design of a low-energy power-ON reset (POR) circuit is proposed to reduce the energy consumed by the stable supply of the dual supply static random access memory (SRAM), as the other supply is ramping up. The proposed POR circuit, when embedded inside dual supply SRAM, removes its ramp-up constraints related to voltage sequencing and pin states. The circuit consumes negligible energy during ramp-up, does not consume dynamic power during operations, and includes hysteresis to improve noise immunity against voltage fluctuations on the power supply. The POR circuit, designed in the 40-nm CMOS technology within 10.6-μm2 area, enabled 27× reduction in the energy consumed by the SRAM array supply during periphery power-up in typical conditions.",
"title": ""
},
{
"docid": "27f6a0f6eedba454c7385499a81a59a3",
"text": "In this paper we compare and evaluate the effectiveness of the brute force methodology using dataset of known password. It is a known fact that user chosen passwords are easily recognizable and crackable, by using several password recovery techniques; Brute force attack is one of them. For rescuing such attacks several organizations proposed the password creation rules which stated that password must include number and special characters for strengthening it and protecting against various password cracking attacks such as Dictionary attack, brute force attack etc. The result of this paper and proposed methodology helps in evaluating the system and account security for measuring the degree of authentication by estimating the password strength. The experiment is conducted on our proposed dataset (TG-DATASET) that contain an iterative procedure for creating the alphanumeric password string like a*, b*, c* and so on. The proposed dataset is prepared due to non-availability of iterative password in any existing password data sets.",
"title": ""
},
{
"docid": "c7d2419eaec21acce9b9dbb3040ed647",
"text": "Current text classification systems typically use term stems for representing document content. Ontologies allow the usage of features on a higher semantic level than single words for text classification purposes. In this paper we propose such an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting, a successful machine learning technique is used for classification. Comparative experimental evaluations in three different settings support our approach through consistent improvement of the results. An analysis of the results shows that this improvement is due to two separate effects.",
"title": ""
},
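The idea described above, augmenting term features with higher-level concept features and classifying with boosting, can be sketched as follows. The toy documents, the hand-made term-to-concept mapping, and the use of scikit-learn's AdaBoost are illustrative assumptions; the paper's own ontology and boosting variant are not reproduced here.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import AdaBoostClassifier

docs = ["the striker scored a goal", "the keeper saved the penalty",
        "the senate passed the bill", "parliament debated the new law"]
labels = [0, 0, 1, 1]                      # 0 = sports, 1 = politics

# toy background knowledge: map terms to higher-level concepts
concept_map = {"striker": "sport", "goal": "sport", "keeper": "sport", "penalty": "sport",
               "senate": "government", "bill": "government", "parliament": "government", "law": "government"}

def concept_features(doc, concepts=("sport", "government")):
    found = {concept_map.get(tok) for tok in doc.split()}
    return [1.0 if c in found else 0.0 for c in concepts]

vec = CountVectorizer()
term_matrix = vec.fit_transform(docs).toarray()
concept_matrix = np.array([concept_features(d) for d in docs])
features = np.hstack([term_matrix, concept_matrix])        # term features + concept features

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(features, labels)
test = "the court reviewed the law"
x = np.hstack([vec.transform([test]).toarray()[0], concept_features(test)])
print(clf.predict([x]))                                    # concept feature helps despite unseen terms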
{
"docid": "494388072f3d7a62d00c5f3b5ad7a514",
"text": "Recent years have seen an increasing interest in providing accurate prediction models for electrical energy consumption. In Smart Grids, energy consumption optimization is critical to enhance power grid reliability, and avoid supply-demand mismatches. Utilities rely on real-time power consumption data from individual customers in their service area to forecast the future demand and initiate energy curtailment programs. Currently however, little is known about the differences in consumption characteristics of various customer types, and their impact on the prediction method’s accuracy. While many studies have concentrated on aggregate loads, showing that accurate consumption prediction at the building level can be achieved, there is a lack of results regarding individual customers consumption prediction. In this study, we perform an empirical quantitative evaluation of various prediction methods of kWh energy consumption of two distinct customer types: 1) small, highly variable individual customers, and 2) aggregated, more stable consumption at the building level. We show that prediction accuracy heavily depends on customer type. Contrary to previous studies, we consider the consumption data granularity to be very small (i.e., 15-min interval), and focus on very short term predictions (next few hours). As Smart Grids move closer to dynamic curtailment programs, which enables demand response (DR) events not only on weekdays, but also during weekends, existing DR strategies prove to be inadequate. Here, we relax the constraint of workdays, and include weekends, where ISO models consistently under perform. Nonetheless, we show that simple ISO baselines, and short-term Time Series, which only depend on recent historical data, achieve superior prediction accuracy. This result suggests that large amounts of historical training data are not required, rather they should be avoided.",
"title": ""
},
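The kind of simple short-horizon baseline the study argues for can be sketched as below: a persistence forecast (same 15-minute interval on the previous day) and a short-term moving average over the most recent intervals, compared by mean absolute percentage error. The synthetic load profile, window length, and horizon are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(2)
intervals_per_day = 96                       # 15-minute resolution
days = 14
t = np.arange(days * intervals_per_day)
# synthetic building load: daily cycle plus noise (kWh per 15 min)
load = 5 + 2 * np.sin(2 * np.pi * t / intervals_per_day) + rng.normal(0, 0.3, t.size)

def persistence_forecast(series, horizon):
    # predict the same intervals of the previous day (assumes horizon < one day)
    return series[-intervals_per_day:-intervals_per_day + horizon]

def moving_average_forecast(series, horizon, window=8):
    # repeat the mean of the last `window` intervals (last 2 hours)
    return np.full(horizon, series[-window:].mean())

def mape(actual, predicted):
    return 100 * np.mean(np.abs((actual - predicted) / actual))

history, actual = load[:-16], load[-16:]     # forecast the next 4 hours
print("persistence MAPE:", round(mape(actual, persistence_forecast(history, 16)), 2))
print("moving-avg  MAPE:", round(mape(actual, moving_average_forecast(history, 16)), 2))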
{
"docid": "129efeb93aad31aca7be77ef499398e2",
"text": "Using a Neonatal Intensive Care Unit (NICU) case study, this work investigates the current CRoss Industry Standard Process for Data Mining (CRISP-DM) approach for modeling Intelligent Data Analysis (IDA)-based systems that perform temporal data mining (TDM). The case study highlights the need for an extended CRISP-DM approach when modeling clinical systems applying Data Mining (DM) and Temporal Abstraction (TA). As the number of such integrated TA/DM systems continues to grow, this limitation becomes significant and motivated our proposal of an extended CRISP-DM methodology to support TDM, known as CRISP-TDM. This approach supports clinical investigations on multi-dimensional time series data. This research paper has three key objectives: 1) Present a summary of the extended CRISP-TDM methodology; 2) Demonstrate the applicability of the proposed model to the NICU data, focusing on the challenges associated with multi-dimensional time series data; and 3) Describe the proposed IDA architecture for applying integrated TDM.",
"title": ""
}
] |
scidocsrr
|
a21cffa47d0cef6ee67b9ea859eb8b3b
|
ARF-Predictor: Effective Prediction of Aging-Related Failure Using Entropy
|
[
{
"docid": "cbb03868af15c8b6b661b5550fa3829c",
"text": "Since the notion of software aging was introduced thirteen years ago, the interest in this phenomenon has been increasing from both academia and industry. The majority of the research efforts in studying software aging have focused on understanding its effects theoretically and empirically. However, conceptual aspects related to the foundation of this phenomenon have not been covered in the literature. This paper discusses foundational aspects of the software aging phenomenon, introducing new concepts and interconnecting them with the current body of knowledge, in order to compose a base taxonomy for the software aging research. Three real case studies are presented with the purpose of exemplifying many of the concepts discussed.",
"title": ""
},
{
"docid": "1aeeed59a3f10790e2a6d8d8e26ad964",
"text": "Concurrency bugs are widespread in multithreaded programs. Fixing them is time-consuming and error-prone. We present CFix, a system that automates the repair of concurrency bugs. CFix works with a wide variety of concurrency-bug detectors. For each failure-inducing interleaving reported by a bug detector, CFix first determines a combination of mutual-exclusion and order relationships that, once enforced, can prevent the buggy interleaving. CFix then uses static analysis and testing to determine where to insert what synchronization operations to force the desired mutual-exclusion and order relationships, with a best effort to avoid deadlocks and excessive performance losses. CFix also simplifies its own patches by merging fixes for related bugs. Evaluation using four different types of bug detectors and thirteen real-world concurrency-bug cases shows that CFix can successfully patch these cases without causing deadlocks or excessive performance degradation. Patches automatically generated by CFix are of similar quality to those manually written by developers.",
"title": ""
}
] |
[
{
"docid": "5f684d374cc52a485d2799c8db07d35b",
"text": "Online banking is the newest and least understood delivery channel for retail banking services. Yet, few, if any, studies were reported quantifying the issues relevant to this cutting-edge technology. This paper reports the results of a quantitative study of the perceptions of banks’ executive and IT managers and potential customers with regard to the drivers, development challenges, and expectations of online banking. The findings will be useful for both researchers and practitioners who seek to understand the issues relevant to online banking. # 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c09e5f5592caab9a076d92b4f40df760",
"text": "Producing a comprehensive overview of the chemical content of biologically-derived material is a major challenge. Apart from ensuring adequate metabolome coverage and issues of instrument dynamic range, mass resolution and sensitivity, there are major technical difficulties associated with data pre-processing and signal identification when attempting large scale, high-throughput experimentation. To address these factors direct infusion or flow infusion electrospray mass spectrometry has been finding utility as a high throughput metabolite fingerprinting tool. With little sample pre-treatment, no chromatography and instrument cycle times of less than 5 min it is feasible to analyse more than 1,000 samples per week. Data pre-processing is limited to aligning extracted mass spectra and mass-intensity matrices are generally ready in a working day for a month’s worth of data mining and hypothesis generation. ESI-MS fingerprinting has remained rather qualitative by nature and as such ion suppression does not generally compromise data information content as originally suggested when the methodology was first introduced. This review will describe how the quality of data has improved through use of nano-flow infusion and mass-windowing approaches, particularly when using high resolution instruments. The increasingly wider availability of robust high accurate mass instruments actually promotes ESI-MS from a merely fingerprinting tool to the ranks of metabolite profiling and combined with MS/MS capabilities of hybrid instruments improved structural information is available concurrently. We summarise current applications in a wide range of fields where ESI-MS fingerprinting has proved to be an excellent tool for “first pass” metabolome analysis of complex biological samples. The final part of the review describes a typical workflow with reference to recently published data to emphasise key aspects of overall experimental design.",
"title": ""
},
{
"docid": "83fba4d122d9c13c4492dfce9c8d8e89",
"text": "We propose two metrics to demonstrate the impact integrating human-computer interaction (HCI) activities in software engineering (SE) processes. User experience metric (UXM) is a product metric that measures the subjective and ephemeral notion of the user’s experience with a product. Index of integration (IoI) is a process metric that measures how integrated the HCI activities were with the SE process. Both metrics have an organizational perspective and can be applied to a wide range of products and projects. Attempt was made to keep the metrics light-weight. While the main motivation behind proposing the two metrics was to establish a correlation between them and thereby demonstrate the effectiveness of the process, several other applications are emerging. The two metrics were evaluated with three industry projects and reviewed by four faculty members from a university and modified based on the feedback.",
"title": ""
},
{
"docid": "073f129a34957b19c6d9af96c869b9ab",
"text": "The stability of dc microgrids (MGs) depends on the control strategy adopted for each mode of operation. In an islanded operation mode, droop control is the basic method for bus voltage stabilization when there is no communication among the sources. In this paper, it is shown the consequences of droop implementation on the voltage stability of dc power systems, whose loads are active and nonlinear, e.g., constant power loads. The set of parallel sources and their corresponding transmission lines are modeled by an ideal voltage source in series with an equivalent resistance and inductance. This approximate model allows performing a nonlinear stability analysis to predict the system qualitative behavior due to the reduced number of differential equations. Additionally, nonlinear analysis provides analytical stability conditions as a function of the model parameters and it leads to a design guideline to build reliable (MGs) based on safe operating regions.",
"title": ""
},
{
"docid": "8c0e5e48c8827a943f4586b8e75f4f9d",
"text": "Predicting the results of football matches poses an interesting challenge due to the fact that the sport is so popular and widespread. However, predicting the outcomes is also a difficult problem because of the number of factors which must be taken into account that cannot be quantitatively valued or modeled. As part of this work, a software solution has been developed in order to try and solve this problem. During the development of the system, a number of tests have been carried out in order to determine the optimal combination of features and classifiers. The results of the presented system show a satisfactory capability of prediction which is superior to the one of the reference method (most likely a priori outcome).",
"title": ""
},
{
"docid": "315e6c863c13dd6fa68620d2ffb66e17",
"text": "In this paper, an algorithm for approximating the path of a moving autonomous mobile sensor with an unknown position location using Received Signal Strength (RSS) measurements is proposed. Using a Least Squares (LS) estimation method as an input, a Maximum-Likelihood (ML) approach is used to determine the location of the unknown mobile sensor. For the mobile sensor case, as the sensor changes position the characteristics of the RSS measurements also change; therefore the proposed method adapts the RSS measurement model by dynamically changing the pass loss value alpha to aid in position estimation. Secondly, a Recursive Least-Squares (RLS) algorithm is used to estimate the path of a moving mobile sensor using the Maximum-Likelihood position estimation as an input. The performance of the proposed algorithm is evaluated via simulation and it is shown that this method can accurately determine the position of the mobile sensor, and can efficiently track the position of the mobile sensor during motion.",
"title": ""
},
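The least-squares / maximum-likelihood position estimation step described above can be illustrated with a log-distance path loss model: each anchor measures RSS = P0 - 10·alpha·log10(d), and the unknown position (and, if desired, the path loss exponent alpha itself) is found by minimizing the squared residuals. The anchor layout, P0, alpha, and noise level below are assumptions for this sketch, not values from the paper.

import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
P0, alpha_true = -40.0, 3.0                      # dBm at 1 m, path loss exponent
true_pos = np.array([6.0, 3.5])

rng = np.random.default_rng(3)
d = np.linalg.norm(anchors - true_pos, axis=1)
rss = P0 - 10 * alpha_true * np.log10(d) + rng.normal(0, 1.0, d.size)   # noisy readings

def residuals(params):
    x, y, alpha = params
    dist = np.maximum(np.linalg.norm(anchors - np.array([x, y]), axis=1), 1e-6)
    return rss - (P0 - 10 * alpha * np.log10(dist))

# joint estimate of position and path loss exponent alpha
fit = least_squares(residuals, x0=[5.0, 5.0, 2.0])
print("estimated position:", fit.x[:2], "estimated alpha:", fit.x[2])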
{
"docid": "1e21662f93476663e01f721642c16336",
"text": "Inspired by the biological concept of central pattern generators (CPGs), this paper deals with adaptive walking control of biped robots. Using CPGs, a trajectory generator is designed consisting of a center-of-gravity (CoG) trajectory generator and a workspace trajectory modulation process. Entraining with feedback information, the CoG generator can generate adaptive CoG trajectories online and workspace trajectories can be modulated in real time based on the generated adaptive CoG trajectories. A motion engine maps trajectories from workspace to joint space. The proposed control strategy is able to generate adaptive joint control signals online to realize biped adaptive walking. The experimental results using a biped platform NAO confirm the effectiveness of the proposed control strategy.",
"title": ""
},
{
"docid": "9a5f5df096ad76798791e7bebd6f8c93",
"text": "Organisational Communication, in today’s organizations has not only become far more complex and varied but has become an important factor for overall organizational functioning and success. The way the organization communicates with its employees is reflected in morale, motivation and performance of the employees. The objective of the present paper is to explore the interrelationship between communication and motivation and its overall impact on employee performance. The paper focuses on the fact that communication in the workplace can take many forms and has a lasting effect on employee motivation. If employees feel that communication from management is effective, it can lead to feelings of job satisfaction, commitment to the organisation and increased trust in the workplace. This study was conducted through a comprehensive review and critical analysis of the research and literature focused upon the objectives of the paper. It also enumerates the results of a study of organizational communication and motivational practices followed at a large manufacturing company, Vanaz Engineers Ltd., based at Pune, to support the hypothesis propounded in the paper.",
"title": ""
},
{
"docid": "237a88ea092d56c6511bb84604e6a7c7",
"text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.",
"title": ""
},
{
"docid": "e6a92df6b717a55f86425b0164e9aa3a",
"text": "The COmpound Semiconductor Materials On Silicon (COSMOS) program of the U.S. Defense Advanced Research Projects Agency (DARPA) focuses on developing transistor-scale heterogeneous integration processes to intimately combine advanced compound semiconductor (CS) devices with high-density silicon circuits. The technical approaches being explored in this program include high-density micro assembly, monolithic epitaxial growth, and epitaxial layer printing processes. In Phase I of the program, performers successfully demonstrated world-record differential amplifiers through heterogeneous integration of InP HBTs with commercially fabricated CMOS circuits. In the current Phase II, complex wideband, large dynamic range, high-speed digital-to-analog convertors (DACs) are under development based on the above heterogeneous integration approaches. These DAC designs will utilize InP HBTs in the critical high-speed, high-voltage swing circuit blocks and will employ sophisticated in situ digital correction techniques enabled by CMOS transistors. This paper will also discuss the Phase III program plan as well as future directions for heterogeneous integration technology that will benefit mixed signal circuit applications.",
"title": ""
},
{
"docid": "6da5d72c237948b03cc6a818884ff937",
"text": "This paper develops a model of conversion behavior (i.e., converting store visits into purchases) that predicts each customers probability of purchasing based on an observed history of visits and purchases. We offer an individual-level probability model that allows for consumer heterogeneity in a very flexible manner. We allow visits to play very different roles in the purchasing process. For example, some visits are motivated by a planned purchase while others are simply browsing visits. The Conversion Model in this paper has the flexibility to accommodate a number of visit-to-purchase relationships. Finally, consumers shopping behavior may evolve over time as a function of past experiences. Thus, the Conversion Model also allows for non-stationarity in behavior. Specifically, our Conversion Model decomposes an individuals purchasing conversion behavior into a visit effect and a purchasing threshold effect. Each component is allowed to vary across households as well as over time. We then apply this model to the problem of managing visitor traffic. By predicting purchasing probabilities for a given visit, the Conversion Model can identify those visits that are likely to result in a purchase. These visits should be re-directed to a server that will provide a better shopping experience while those visitors that are less likely to result in a purchase may be identified as targets for a promotion.",
"title": ""
},
{
"docid": "455b2a46ef0a6a032686eaaedf9cacf3",
"text": "Recently, taxonomy has attracted much attention. Both automatic construction solutions and human-based computation approaches have been proposed. The automatic methods suffer from the problem of either low precision or low recall and human computation, on the other hand, is not suitable for large scale tasks. Motivated by the shortcomings of both approaches, we present a hybrid framework, which combines the power of machine-based approaches and human computation (the crowd) to construct a more complete and accurate taxonomy. Specifically, our framework consists of two steps: we first construct a complete but noisy taxonomy automatically, then crowd is introduced to adjust the entity positions in the constructed taxonomy. However, the adjustment is challenging as the budget (money) for asking the crowd is often limited. In our work, we formulate the problem of finding the optimal adjustment as an entity selection optimization (ESO) problem, which is proved to be NP-hard. We then propose an exact algorithm and a more efficient approximation algorithm with an approximation ratio of 1/2(1-1/e). We conduct extensive experiments on real datasets, the results show that our hybrid approach largely improves the recall of the taxonomy with little impairment for precision.",
"title": ""
},
{
"docid": "827396df94e0bca08cee7e4d673044ef",
"text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications. However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.",
"title": ""
},
{
"docid": "9756d72cfbb35d9a532f922e3eaccc8c",
"text": "Conceived in the early 1990s, Experience Replay (ER) has been shown to be a successful mechanism to allow online learning algorithms to reuse past experiences. Traditionally, ER can be applied to all machine learning paradigms (i.e., unsupervised, supervised, and reinforcement learning). Recently, ER has contributed to improving the performance of deep reinforcement learning. Yet, its application to many practical settings is still limited by the memory requirements of ER, necessary to explicitly store previous observations. To remedy this issue, we explore a novel approach, Online Contrastive Divergence with Generative Replay (OCDGR), which uses the generative capability of Restricted Boltzmann Machines (RBMs) instead of recorded past experiences. The RBM is trained online, and does not require the system to store any of the observed data points. We compare OCDGR to ER on 9 real-world datasets, considering a worst-case scenario (data points arriving in sorted order) as well as a more realistic one (sequential random-order data points). Our results show that in 64.28% of the cases OCDGR outperforms ER and in the remaining 35.72% it has an almost equal performance, while having a considerably reduced space complexity (i.e., memory usage) at a comparable time complexity.",
"title": ""
},
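The core idea above — training a small Bernoulli RBM online with contrastive divergence while mixing each new observation with "replay" samples generated by the RBM itself, instead of storing past data — can be sketched as below. The network size, learning rate, number of Gibbs steps, and the toy data stream are illustrative assumptions and not the exact OCDGR configuration.

import numpy as np

rng = np.random.default_rng(4)
n_visible, n_hidden, lr = 20, 16, 0.05
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b = np.zeros(n_visible)          # visible bias
c = np.zeros(n_hidden)           # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

def cd1_update(v0):
    # one step of contrastive divergence on a batch of visible vectors
    global W, b, c
    ph0 = sigmoid(v0 @ W + c); h0 = sample(ph0)
    pv1 = sigmoid(h0 @ W.T + b); v1 = sample(pv1)
    ph1 = sigmoid(v1 @ W + c)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

def generate_replay(n, gibbs_steps=10):
    # draw samples from the current model instead of storing old observations
    v = sample(np.full((n, n_visible), 0.5))
    for _ in range(gibbs_steps):
        h = sample(sigmoid(v @ W + c))
        v = sample(sigmoid(h @ W.T + b))
    return v

# online stream: each new observation is trained together with generated replay
for step in range(500):
    new_obs = (rng.random(n_visible) < 0.2).astype(float)[None, :]   # one incoming data point
    batch = np.vstack([new_obs, generate_replay(4)])
    cd1_update(batch)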
{
"docid": "acc700d965586f5ea65bdcb67af38fca",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.",
"title": ""
},
{
"docid": "91eaef6e482601533656ca4786b7a023",
"text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.",
"title": ""
},
{
"docid": "e144d8c0f046ad6cd2e5c71844b2b532",
"text": "Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.",
"title": ""
},
{
"docid": "58bc5fb67cfb5e4b623b724cb4283a17",
"text": "In recent years, power systems have been very difficult to manage as the load demands increase and environment constraints restrict the distribution network. One another mode used for distribution of Electrical power is making use of underground cables (generally in urban areas only) instead of overhead distribution network. The use of underground cables arise a problem of identifying the fault location as it is not open to view as in case of overhead network. To improve the reliability of a distribution system, accurate identification of a faulted segment is required in order to reduce the interruption time during fault. Speedy and precise fault location plays an important role in accelerating system restoration, reducing outage time, reducing great financial loss and significantly improving system reliability. The objective of this paper is to study the methods of determining the distance of underground cable fault from the base station in kilometers. Underground cable system is a common practice followed in major urban areas. While a fault occurs for some reason, at that time the repairing process related to that particular cable is difficult due to exact unknown location of the fault in the cable. In this paper, a technique for detecting faults in underground distribution system is presented. Proposed system is used to find out the exact location of the fault and to send an SMS with details to a remote mobile phone using GSM module.",
"title": ""
},
{
"docid": "cd6e01015c90b61cff1e5492a666a0e2",
"text": "The ubiquitin proteasome pathway (UPP) is essential for removing abnormal proteins and preventing accumulation of potentially toxic proteins within the neuron. UPP dysfunction occurs with normal aging and is associated with abnormal accumulation of protein aggregates within neurons in neurodegenerative diseases. Ischemia disrupts UPP function and thus may contribute to UPP dysfunction seen in the aging brain and in neurodegenerative diseases. Ubiquitin carboxy-terminal hydrolase L1 (UCHL1), an important component of the UPP in the neuron, is covalently modified and its activity inhibited by reactive lipids produced after ischemia. As a result, degradation of toxic proteins is impaired which may exacerbate neuronal function and cell death in stroke and neurodegenerative diseases. Preserving or restoring UCHL1 activity may be an effective therapeutic strategy in stroke and neurodegenerative diseases.",
"title": ""
},
{
"docid": "fb8fbcb1d2121f64e80e0e0236d7c29d",
"text": "This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods of neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model update. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity. Empirical experiments demonstrated the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.",
"title": ""
}
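The single-pass nature of the extension described above can be illustrated with the basic SGD step of skip-gram with negative sampling: for each incoming (target, context) pair, the target vector and the context/negative output vectors are nudged immediately, so the model can absorb new text without revisiting old data. The vocabulary size, embedding dimension, learning rate, and the uniform negative sampler are assumptions of this sketch, not the paper's exact scheme.

import numpy as np

rng = np.random.default_rng(5)
vocab, dim, lr, k_neg = 1000, 50, 0.025, 5
W_in = rng.normal(0, 0.1, (vocab, dim))    # target (input) vectors
W_out = np.zeros((vocab, dim))             # context (output) vectors
unigram = np.ones(vocab) / vocab           # negative-sampling distribution (assumed uniform here)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(target, context):
    # one incremental update for a single (target, context) pair
    negatives = rng.choice(vocab, size=k_neg, p=unigram)
    v_t = W_in[target]
    grad_t = np.zeros(dim)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        g = sigmoid(v_t @ u) - label       # gradient of the logistic loss w.r.t. the score
        grad_t += g * u
        W_out[word] -= lr * g * v_t
    W_in[target] -= lr * grad_t

# feed an (unbounded) stream of word-id pairs one at a time
for _ in range(10_000):
    sgns_step(rng.integers(vocab), rng.integers(vocab))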
] |
scidocsrr
|
5ba8c0b70f6af4969283d83a9c816a8f
|
Distortion Models for Statistical Machine Translation
|
[
{
"docid": "8038e56b44b4f554dc8fed075910a6dc",
"text": "In this paper, we describe improved alignment models for statistical machine translation. The statistical translation approach uses two types of information: a translation model and a language model. The language model used is a bigram or general m-gram model. The translation model is decomposed into a lexical and an alignment model. We describe two different approaches for statistical translation and present experimental results. The first approach is based on dependencies between single words, the second approach explicitly takes shallow phrase structures into account, using two different alignment levels: a phrase level alignment between phrases and a word level alignment between single words. We present results using the Verbmobil task (German-English, 6000word vocabulary) which is a limited-domain spoken-language task. The experimental tests were performed on both the text transcription and the speech recognizer output. 1 S t a t i s t i c a l M a c h i n e T r a n s l a t i o n The goal of machine translation is the translation of a text given in some source language into a target language. We are given a source string f / = fl...fj...fJ, which is to be translated into a target string e{ = el...ei...ex. Among all possible target strings, we will choose the string with the highest probability: = argmax {Pr(ezIlflJ)}",
"title": ""
}
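With the notation restored, the decision rule quoted in the passage above is the standard noisy-channel decomposition into the two knowledge sources it mentions (a language model and a translation model). Written out in LaTeX, and assuming the usual Bayes factorization used in this line of work:

\hat{e}_1^I \;=\; \operatorname*{argmax}_{e_1^I} \Pr\!\left(e_1^I \mid f_1^J\right)
            \;=\; \operatorname*{argmax}_{e_1^I} \Pr\!\left(e_1^I\right)\,\Pr\!\left(f_1^J \mid e_1^I\right)

Here \Pr(e_1^I) is the (bigram or m-gram) language model and \Pr(f_1^J \mid e_1^I) is the translation model, itself decomposed into lexical and alignment components.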
] |
[
{
"docid": "1655b927fa07bed8bf3769bf2dba01b6",
"text": "The non-central chi-square distribution plays an important role in communications, for example in the analysis of mobile and wireless communication systems. It not only includes the important cases of a squared Rayleigh distribution and a squared Rice distribution, but also the generalizations to a sum of independent squared Gaussian random variables of identical variance with or without mean, i.e., a \"squared MIMO Rayleigh\" and \"squared MIMO Rice\" distribution. In this paper closed-form expressions are derived for the expectation of the logarithm and for the expectation of the n-th power of the reciprocal value of a non-central chi-square random variable. It is shown that these expectations can be expressed by a family of continuous functions gm(ldr) and that these families have nice properties (monotonicity, convexity, etc.). Moreover, some tight upper and lower bounds are derived that are helpful in situations where the closed-form expression of gm(ldr) is too complex for further analysis.",
"title": ""
},
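The expectations discussed above are easy to sanity-check numerically: scipy provides the non-central chi-square distribution, so E[ln X] and E[1/X^n] can be estimated by Monte Carlo for given degrees of freedom and non-centrality. The parameter values below are arbitrary; note that E[1/X] is finite only for more than two degrees of freedom (and E[1/X^2] for more than four).

import numpy as np
from scipy import stats

df, nc = 6, 2.5                      # degrees of freedom, non-centrality (arbitrary choices)
rng = np.random.default_rng(6)
x = stats.ncx2(df, nc).rvs(size=1_000_000, random_state=rng)

print("E[ln X]  ~", np.log(x).mean())
print("E[1/X]   ~", (1.0 / x).mean())        # finite because df > 2
print("E[1/X^2] ~", (1.0 / x**2).mean())     # finite because df > 4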
{
"docid": "033fb4c857f79fc593bd9a7e12269b49",
"text": "Within any Supply Chain Risk Management (SCRM) approach, the concept “Risk” occupies a central interest. Numerous frameworks which differ by the provided definitions and relationships between supply chain risk dimensions and metrics are available. This article provides an outline of the most common SCRM methodologies, in order to suggest an “integrated conceptual model”. The objective of such an integrated model is not to describe yet another conceptual model of Risk, but rather to offer a concrete structure incorporating the characteristics of the supply chain in the risk management process. The proposed alignment allows a better understanding of the dynamic of risk management strategies. Firstly, the model was analyzed through its positioning and its contributions compared to existing tools and models in the literature. This comparison highlights the critical points overlooked in the past. Secondly, the model was applied on case studies of major supply chain crisis.",
"title": ""
},
{
"docid": "88ea3f043b43a11a0a7d79e59a774c1f",
"text": "The purpose of this paper is to present an alternative systems thinking–based perspective and approach to the requirements elicitation process in complex situations. Three broad challenges associated with the requirements engineering elicitation in complex situations are explored, including the (1) role of the system observer, (2) nature of system requirements in complex situations, and (3) influence of the system environment. Authors have asserted that the expectation of unambiguous, consistent, complete, understandable, verifiable, traceable, and modifiable requirements is not consistent with complex situations. In contrast, complex situations are an emerging design reality for requirements engineering processes, marked by high levels of ambiguity, uncertainty, and emergence. This paper develops the argument that dealing with requirements for complex situations requires a change in paradigm. The elicitation of requirements for simple and technically driven systems is appropriately accomplished by proven methods. In contrast, the elicitation of requirements in complex situations (e.g., integrated multiple critical infrastructures, system-of-systems, etc.) requires more holistic thinking and can be enhanced by grounding in systems theory.",
"title": ""
},
{
"docid": "0444b38c0d20c999df4cb1294b5539c3",
"text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. In the proposed fully redundant adder (VS semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest high performance similar works and outperform all the previous alike adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has lead to a VLSI-friendly recursive partial product reduction tree. Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using TSMC 0.13mm standard CMOS process under various time constrains. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c92f14328d4f01c11eff94b073856d3f",
"text": "Whenever the need to compile a new dynamically typed language arises, an appealing option is to repurpose an existing statically typed language Just-In-Time (JIT) compiler (repurposed JIT compiler). Existing repurposed JIT compilers (RJIT compilers), however, have not yet delivered the hoped-for performance boosts. The performance of JVM languages, for instance, often lags behind standard interpreter implementations. Even more customized solutions that extend the internals of a JIT compiler for the target language compete poorly with those designed specifically for dynamically typed languages. Our own Fiorano JIT compiler is an example of this problem. As a state-of-the-art, RJIT compiler for Python, the Fiorano JIT compiler outperforms two other RJIT compilers (Unladen Swallow and Jython), but still shows a noticeable performance gap compared to PyPy, today's best performing Python JIT compiler. In this paper, we discuss techniques that have proved effective in the Fiorano JIT compiler as well as limitations of our current implementation. More importantly, this work offers the first in-depth look at benefits and limitations of the repurposed JIT compiler approach. We believe the most common pitfall of existing RJIT compilers is not focusing sufficiently on specialization, an abundant optimization opportunity unique to dynamically typed languages. Unfortunately, the lack of specialization cannot be overcome by applying traditional optimizations.",
"title": ""
},
{
"docid": "7e08a713a97f153cdd3a7728b7e0a37c",
"text": "The availability of long circulating, multifunctional polymers is critical to the development of drug delivery systems and bioconjugates. The ease of synthesis and functionalization make linear polymers attractive but their rapid clearance from circulation compared to their branched or cyclic counterparts, and their high solution viscosities restrict their applications in certain settings. Herein, we report the unusual compact nature of high molecular weight (HMW) linear polyglycerols (LPGs) (LPG - 100; M(n) - 104 kg mol(-1), M(w)/M(n) - 1.15) in aqueous solutions and its impact on its solution properties, blood compatibility, cell compatibility, in vivo circulation, biodistribution and renal clearance. The properties of LPG have been compared with hyperbranched polyglycerol (HPG) (HPG-100), linear polyethylene glycol (PEG) with similar MWs. The hydrodynamic size and the intrinsic viscosity of LPG-100 in water were considerably lower compared to PEG. The Mark-Houwink parameter of LPG was almost 10-fold lower than that of PEG. LPG and HPG demonstrated excellent blood and cell compatibilities. Unlike LPG and HPG, HMW PEG showed dose dependent activation of blood coagulation, platelets and complement system, severe red blood cell aggregation and hemolysis, and cell toxicity. The long blood circulation of LPG-100 (t(1/2β,) 31.8 ± 4 h) was demonstrated in mice; however, it was shorter compared to HPG-100 (t(1/2β,) 39.2 ± 8 h). The shorter circulation half life of LPG-100 was correlated with its higher renal clearance and deformability. Relatively lower organ accumulation was observed for LPG-100 and HPG-100 with some influence of on the architecture of the polymers. Since LPG showed better biocompatibility profiles, longer in vivo circulation time compared to PEG and other linear drug carrier polymers, and has multiple functionalities for conjugation, makes it a potential candidate for developing long circulating multifunctional drug delivery systems similar to HPG.",
"title": ""
},
{
"docid": "158c535b44fe81ca7194d5a0b386f2b5",
"text": "Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM) [1]. Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training autoencoders. Human observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures (L1 and L2 distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. We argue that significant advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.",
"title": ""
},
{
"docid": "38b93f50d4fc5a1029ebedb5a544987a",
"text": "We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.",
"title": ""
},
{
"docid": "01f25dcc13efd4c3a168b8acd9f0f2f7",
"text": "This paper describes an approach for the problem of face pose discrimination using Support Vector Machines (SVM). Face pose discrimination means that one can label the face image as one of several known poses. Face images are drawn from the standard FERET data base. The training set consists of 150 images equally distributed among frontal, approximately 33.75 rotated left and right poses, respectively, and the test set consists of 450 images again equally distributed among the three different types of poses. SVM achieved perfect accuracy 100% discriminating between the three possible face poses on unseen test data, using either polynomials of degree 3 or Radial Basis Functions (RBFs) as kernel approximation functions.",
"title": ""
},
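The classification setup described above — three pose classes separated by an SVM with a degree-3 polynomial or RBF kernel — can be sketched with scikit-learn on synthetic feature vectors standing in for the FERET images; the feature dimensionality and class geometry here are invented for illustration and do not reproduce the paper's data.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_per_class, dim = 200, 40
centers = rng.normal(0, 3.0, (3, dim))                  # frontal, left, right pose prototypes
X = np.vstack([c + rng.normal(0, 1.0, (n_per_class, dim)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)                   # pose labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
for params in ({"kernel": "poly", "degree": 3}, {"kernel": "rbf"}):
    clf = SVC(**params).fit(X_tr, y_tr)
    print(params, "test accuracy:", clf.score(X_te, y_te))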
{
"docid": "ed3129c0464090ed5bc80e2fff0fa945",
"text": "Electricity has been widely adopted in electric car, robot, and other systems as a power sources. A lot of robots, namely, fixed or mobile robots are studied, and researches are done to fuse the new & renewable energy with robot technology to solve the energy supply problems. For the mobile robots, the energies are usually supplied through the battery, and this causes the limitations of the activity time, the discontinuation of activity, and the range of activity. Therefore, solving the problems and securing the continuous energy supply scheme for extending the running time of the mobile robots can show the advantages for the robot activity and achievements. In the study, we suggest a scheme for the battery replacing method for the mobile robot.",
"title": ""
},
{
"docid": "e2d39e2714351b04054b871fa8a7a2fa",
"text": "In this letter, we propose sparsity-based coherent and noncoherent dictionaries for action recognition. First, the input data are divided into different clusters and the number of clusters depends on the number of action categories. Within each cluster, we seek data items of each action category. If the number of data items exceeds threshold in any action category, these items are labeled as coherent. In a similar way, all coherent data items from different clusters form a coherent group of each action category, and data that are not part of the coherent group belong to noncoherent group of each action category. These coherent and noncoherent groups are learned using K-singular value decomposition dictionary learning. Since the coherent group has more similarity among data, only few atoms need to be learned. In the noncoherent group, there is a high variability among the data items. So, we propose an orthogonal-projection-based selection to get optimal dictionary in order to retain maximum variance in the data. Finally, the obtained dictionary atoms of both groups in each action category are combined and then updated using the limited Broyden–Fletcher–Goldfarb–Shanno optimization algorithm. The experiments are conducted on challenging datasets HMDB51 and UCF50 with action bank features and achieve comparable result using this state-of-the-art feature.",
"title": ""
},
{
"docid": "de3f2ad88e3a99388975cc3da73e5039",
"text": "Machine-learning techniques have recently been proved to be successful in various domains, especially in emerging commercial applications. As a set of machine-learning techniques, artificial neural networks (ANNs), requiring considerable amount of computation and memory, are one of the most popular algorithms and have been applied in a broad range of applications such as speech recognition, face identification, natural language processing, ect. Conventionally, as a straightforward way, conventional CPUs and GPUs are energy-inefficient due to their excessive effort for flexibility. According to the aforementioned situation, in recent years, many researchers have proposed a number of neural network accelerators to achieve high performance and low power consumption. Thus, the main purpose of this literature is to briefly review recent related works, as well as the DianNao-family accelerators. In summary, this review can serve as a reference for hardware researchers in the area of neural networks.",
"title": ""
},
{
"docid": "85a28a33d52ecea11b76d47e9cbf14de",
"text": "Currently, use and disposal of plastic by consumers through waste management activities in Ghana not only creates environmental problems, but also reinforces the notion of a wasteful society. The magnitude of this problem has led to increasing pressure from the public for efficient and practical measures to solve the waste problem. This paper analyses the impact of plastic use and disposal in Ghana. It emphasizes the need for commitment to proper management of the impacts of plastic waste and effective environmental management in the country. Sustainable Solid Waste Management (SSWM) is a critical problem for developing countries with regards to climate change and greenhouse gas emission, and also the general wellbeing of the populace. Key themes of this paper are producer responsibility and management of products at end of life. The paper proposes two theatrical recovery models that can be used to address the issue of sachet waste in Ghana.",
"title": ""
},
{
"docid": "57233e0b2c7ef60cc505cd23492a2e03",
"text": "In nature, the eastern North American monarch population is known for its southward migration during the late summer/autumn from the northern USA and southern Canada to Mexico, covering thousands of miles. By simplifying and idealizing the migration of monarch butterflies, a new kind of nature-inspired metaheuristic algorithm, called monarch butterfly optimization (MBO), a first of its kind, is proposed in this paper. In MBO, all the monarch butterfly individuals are located in two distinct lands, viz. southern Canada and the northern USA (Land 1) and Mexico (Land 2). Accordingly, the positions of the monarch butterflies are updated in two ways. Firstly, the offsprings are generated (position updating) by migration operator, which can be adjusted by the migration ratio. It is followed by tuning the positions for other butterflies by means of butterfly adjusting operator. In order to keep the population unchanged and minimize fitness evaluations, the sum of the newly generated butterflies in these two ways remains equal to the original population. In order to demonstrate the superior performance of the MBO algorithm, a comparative study with five other metaheuristic algorithms through thirty-eight benchmark problems is carried out. The results clearly exhibit the capability of the MBO method toward finding the enhanced function values on most of the benchmark problems with respect to the other five algorithms. Note that the source codes of the proposed MBO algorithm are publicly available at GitHub ( https://github.com/ggw0122/Monarch-Butterfly-Optimization , C++/MATLAB) and MATLAB Central ( http://www.mathworks.com/matlabcentral/fileexchange/50828-monarch-butterfly-optimization , MATLAB).",
"title": ""
},
{
"docid": "cab97e23b7aa291709ecf18e29f580cf",
"text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.",
"title": ""
},
{
"docid": "bbdc213c082fd0573add260e99447f2d",
"text": "Received: May 17, 2015. Received in revised form: October 15, 2015. Accepted: October 25, 2015. Although construction has been known as a highly complex application field for autonomous robotic systems, recent advances in this field offer great hope for using robotic capabilities to develop automated construction. Today, space research agencies seek to build infrastructures without human intervention, and construction companies look to robots with the potential to improve construction quality, efficiency, and safety, not to mention flexibility in architectural design. However, unlike production robots used, for instance, in automotive industries, autonomous robots should be designed with special consideration for challenges such as the complexity of the cluttered and dynamic working space, human-robot interactions and inaccuracy in positioning due to the nature of mobile systems and the lack of affordable and precise self-positioning solutions. This paper briefly reviews state-ofthe-art research into automated construction by autonomous mobile robots. We address and classify the relevant studies in terms of applications, materials, and robotic systems. We also identify ongoing challenges and discuss about future robotic requirements for automated construction.",
"title": ""
},
{
"docid": "77222e2a34cba752b133502bd816f9ab",
"text": "To describe the use of a local hemostatic agent (LHA) for the management of postpartum hemorrhage (PPH) due to bleeding of the placental bed in patients taken to caesarean section at Fundación Santa Fe de Bogotá University Hospital. A total of 41 pregnant women who had a caesarean section and developed PPH. A cross-sectional study. Analysis of all cases of PPH during caesarean section presented from 2006 up to and including 2012 at Fundación Santa Fe de Bogotá University Hospital. Emergency hysterectomy due to PPH. The proportion of hysterectomies was 5 vs. 66 % for the group that received and did not receive management with a LHA respectively (PR 0.07, CI 95 % 0.01–0.51 p < 0.01). For the group managed without a LHA, 80 % of patients needed hemoderivatives transfusion vs. 20 % of patients in the group managed with a LHA (PR 0.24, CI 95 % 0.1–0.6 p < 0.01). A reduction in the mean days of hospitalization in addition to a descent in the proportion of patients admitted to the intensive care unit (ICU) was noticed when comparing the group that received a LHA versus the one that did not. An inverse association between the use of a LHA in patients with PPH due to bleeding of the placental bed and the need to perform an emergency obstetric hysterectomy was observed. Additionally there was a significant reduction in the mean duration of hospital stay, use of hemoderivatives and admission to the ICU.",
"title": ""
},
{
"docid": "8721382dd1674fac3194d015b9c64f94",
"text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a -lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves",
"title": ""
},
{
"docid": "4b4ff17023cf54fe552697ef83c83926",
"text": "Artificial intelligence has been an active branch of research for computer scientists and psychologists for 50 years. The concept of mimicking human intelligence in a computer fuels the public imagination and has led to countless academic papers, news articles and fictional works. However, public expectations remain largely unfulfilled, owing to the incredible complexity of everyday human behavior. A wide range of tools and techniques have emerged from the field of artificial intelligence, many of which are reviewed here. They include rules, frames, model-based reasoning, case-based reasoning, Bayesian updating, fuzzy logic, multiagent systems, swarm intelligence, genetic algorithms, neural networks, and hybrids such as blackboard systems. These are all ingenious, practical, and useful in various contexts. Some approaches are pre-specified and structured, while others specify only low-level behavior, leaving the intelligence to emerge through complex interactions. Some approaches are based on the use of knowledge expressed in words and symbols, whereas others use only mathematical and numerical constructions. It is proposed that there exists a spectrum of intelligent behaviors from low-level reactive systems through to high-level systems that encapsulate specialist expertise. Separate branches of research have made strides at both ends of the spectrum, but difficulties remain in devising a system that spans the full spectrum of intelligent behavior, including the difficult areas in the middle that include common sense and perception. Artificial intelligence is increasingly appearing in situated systems that interact with their physical environment. As these systems become more compact they are likely to become embedded into everyday equipment. As the 50th anniversary approaches of the Dartmouth conference where the term ‘artificial intelligence’ was first published, it is concluded that the field is in good shape and has delivered some great results. Yet human thought processes are incredibly complex, and mimicking them convincingly remains an elusive challenge. ADVANCES IN COMPUTERS, VOL. 65 1 Copyright © 2005 Elsevier Inc. ISSN: 0065-2458/DOI 10.1016/S0065-2458(05)65001-2 All rights reserved.",
"title": ""
}
] |
scidocsrr
|
e8b37299160062eeb75df1e3a35e51eb
|
A COMPARATIVE STUDY OF SENTIMENT ANALYSIS TECHNIQUES 1
|
[
{
"docid": "88e535a63f5c594edb18167ec8a78750",
"text": "Finding the weakness of the products from the customers’ feedback can help manufacturers improve their product quality and competitive strength. In recent years, more and more people express their opinions about products online, and both the feedback of manufacturers’ products or their competitors’ products could be easily collected. However, it’s impossible for manufacturers to read every review to analyze the weakness of their products. Therefore, finding product weakness from online reviews becomes a meaningful work. In this paper, we introduce such an expert system, Weakness Finder, which can help manufacturers find their product weakness from Chinese reviews by using aspects based sentiment analysis. An aspect is an attribute or component of a product, such as price, degerm, moisturizing are the aspects of the body wash products. Weakness Finder extracts the features and groups explicit features by using morpheme based method and Hownet based similarity measure, and identify and group the implicit features with collocation selection method for each aspect. Then utilize sentence based sentiment analysis method to determine the polarity of each aspect in sentences. The weakness of product could be found because the weakness is probably the most unsatisfied aspect in customers’ reviews, or the aspect which is more unsatisfied when compared with their competitor’s product reviews. Weakness Finder has been used to help a body wash manufacturer find their product weakness, and our experimental results demonstrate the good performance of the Weakness Finder. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6081f8b819133d40522a4698d4212dfc",
"text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "e6a5ff945613e3b4db9df925d4ff7d28",
"text": "Fear recognition, which aims at predicting whether a movie segment can induce fear or not, is a promising area in movie emotion recognition. Research in this area, however, has reached a bottleneck. Difficulties may partly result from the imbalanced database. In this paper, we propose an imbalance learning-based framework for movie fear recognition. A data rebalance module is adopted before classification. Several sampling methods, including the proposed softsampling and hardsampling which combine the merits of both undersampling and oversampling, are explored in this module. Experiments are conducted on the MediaEval 2017 Emotional Impact of Movies Task. Compared with the current state-of-the-art, we achieve an improvement of 8.94% on F1, proving the effectiveness of proposed framework.",
"title": ""
},
{
"docid": "24c49ac0ed56f27982cfdad18054e466",
"text": "This paper examines two alternative approaches to supporting code scheduling for multiple-instruction-issue processors. One is to provide a set of non-trapping instructions so that the compiler can perform aggressive static code scheduling. The application of this approach to existing commercial architectures typically requires extending the instruction set. The other approach is to support out-of-order execution in the microarchitecture so that the hardware can perform aggressive dynamic code scheduling. This approach usually does not require modifying the instruction set but requires complex hardware support. In this paper, we analyze the performance of the two alternative approaches using a set of important nonnumerical C benchmark programs. A distinguishing feature of the experiment is that the code for the dynamic approach has been optimized and scheduled as much as allowed by the architecture. The hardware is only responsible for the additional reordering that cannot be performed by the compiler. The overall result is that the clynamic and static approaches are comparable in performance. When applied to a four-instruction-issue processor, both methods achieve more than two times speedup over a high performance single-instruction-issue processor. However, the performance of each scheme varies among the benchmark programs. To explain this variation, we have identified the conditions in these programs that make one approach perform better than the other.",
"title": ""
},
{
"docid": "b439ba7fbb20a14ad874c533aa5b07b3",
"text": "This paper is inspired by how cognitive control manifests itself in the human brain and does so in a remarkable way. It addresses the many facets involved in the control of directed information flow in a dynamic system, culminating in the notion of information gap, defined as the difference between relevant information (useful part of what is extracted from the incoming measurements) and sufficient information representing the information needed for achieving minimal risk. The notion of information gap leads naturally to how cognitive control can itself be defined. Then, another important idea is described, namely the two-state model, in which one is the system's state and the other is the entropic state that provides an essential metric for quantifying the information gap. The entropic state is computed in the perceptual part (i.e., perceptor) of the dynamic system and sent to the controller directly as feedback information. This feedback information provides the cognitive controller the information needed about the environment and the system to bring reinforcement leaning into play; reinforcement learning (RL), incorporating planning as an integral part, is at the very heart of cognitive control. The stage is now set for a computational experiment, involving cognitive radar wherein the cognitive controller is enabled to control the receiver via the environment. The experiment demonstrates how RL provides the mechanism for improved utilization of computational resources, and yet is able to deliver good performance through the use of planning. The paper finishes with concluding remarks.",
"title": ""
},
{
"docid": "14e8006ae1fc0d97e737ff2a5a4d98dd",
"text": "Building dialogue systems that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human utterances in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialogue. Our model represents the first attempt to integrating a large commonsense knowledge base into end-toend conversational models. In the retrieval-based scenario, we propose a model to jointly take into account message content and related commonsense for selecting an appropriate response. Our experiments suggest that the knowledgeaugmented models are superior to their knowledge-free counterparts.",
"title": ""
},
{
"docid": "e10886264acb1698b36c4d04cf2d9df6",
"text": "† This work was supported by the RGC CERG project PolyU 5065/98E and the Departmental Grant H-ZJ84 ‡ Corresponding author ABSTRACT Pattern discovery from time series is of fundamental importance. Particularly when the domain expert derived patterns do not exist or are not complete, an algorithm to discover specific patterns or shapes automatically from the time series data is necessary. Such an algorithm is noteworthy in that it does not assume prior knowledge of the number of interesting structures, nor does it require an exhaustive explanation of the patterns being described. In this paper, a clustering approach is proposed for pattern discovery from time series. In view of its popularity and superior clustering performance, the self-organizing map (SOM) was adopted for pattern discovery in temporal data sequences. It is a special type of clustering algorithm that imposes a topological structure on the data. To prepare for the SOM algorithm, data sequences are segmented from the numerical time series using a continuous sliding window. Similar temporal patterns are then grouped together using SOM into clusters, which may subsequently be used to represent different structures of the data or temporal patterns. Attempts have been made to tackle the problem of representing patterns in a multi-resolution manner. With the increase in the number of data points in the patterns (the length of patterns), the time needed for the discovery process increases exponentially. To address this problem, we propose to compress the input patterns by a perceptually important point (PIP) identification algorithm. The idea is to replace the original data segment by its PIP’s so that the dimensionality of the input pattern can be reduced. Encouraging results are observed and reported for the application of the proposed methods to the time series collected from the Hong Kong stock market.",
"title": ""
},
{
"docid": "50b6f8067784fe4b9b3adf6db17ab4d1",
"text": "Available online 23 November 2012",
"title": ""
},
{
"docid": "0e0c0a2a523ac7d39f8cfdb8fc69d6f0",
"text": "This paper surveys recent literature in the area of Neural Network, Data Mining, Hidden Markov Model and Neuro-Fuzzy system used to predict the stock market fluctuation. Neural Networks and Neuro-Fuzzy systems are identified to be the leading machine learning techniques in stock market index prediction area. The Traditional techniques are not cover all the possible relation of the stock price fluctuations. There are new approaches to known in-depth of an analysis of stock price variations. NN and Markov Model can be used exclusively in the finance markets and forecasting of stock price. In this paper, we propose a forecasting method to provide better an accuracy rather traditional method.",
"title": ""
},
{
"docid": "55903de2bf1c877fac3fdfc1a1db68fc",
"text": "UK small to medium sized enterprises (SMEs) are suffering increasing levels of cybersecurity breaches and are a major point of vulnerability in the supply chain networks in which they participate. A key factor for achieving optimal security levels within supply chains is the management and sharing of cybersecurity information associated with specific metrics. Such information sharing schemes amongst SMEs in a supply chain network, however, would give rise to a certain level of risk exposure. In response, the purpose of this paper is to assess the implications of adopting select cybersecurity metrics for information sharing in SME supply chain consortia. Thus, a set of commonly used metrics in a prototypical cybersecurity scenario were chosen and tested from a survey of 17 UK SMEs. The results were analysed in respect of two variables; namely, usefulness of implementation and willingness to share across supply chains. Consequently, we propose a Cybersecurity Information Sharing Taxonomy for identifying risk exposure categories for SMEs sharing cybersecurity information, which can be applied to developing Information Sharing Agreements (ISAs) within SME supply chain consortia.",
"title": ""
},
{
"docid": "2c1de0ee482b3563c6b0b49bfdbbe508",
"text": "The paper summarizes our research in the area of unsupervised categorization of Wikipedia articles. As a practical result of our research, we present an application of spectral clustering algorithm used for grouping Wikipedia search results. The main contribution of the paper is a representation method for Wikipedia articles that has been based on combination of words and links and used for categoriation of search results in this repository. We evaluate the proposed approach with Primary Component projections and show, on the test data, how usage of cosine transformation to create combined representations influence data variability. On sample test datasets, we also show how combined representation improves the data separation that increases overall results of data categorization. To implement the system, we review the main spectral clustering methods and we test their usability for text categorization. We give a brief description of the system architecture that groups online Wikipedia articles retrieved with user-specified keywords. Using the system, we show how clustering increases information retrieval effectiveness for Wikipedia data repository.",
"title": ""
},
{
"docid": "228c59c9bf7b4b2741567bffb3fcf73f",
"text": "This paper presents a new PSO-based optimization DBSCAN space clustering algorithm with obstacle constraints. The algorithm introduces obstacle model and simplifies two-dimensional coordinates of the cluster object coding to one-dimensional, then uses the PSO algorithm to obtain the shortest path and minimum obstacle distance. At the last stage, this paper fulfills spatial clustering based on obstacle distance. Theoretical analysis and experimental results show that the algorithm can get high-quality clustering result of space constraints with more reasonable and accurate quality.",
"title": ""
},
{
"docid": "15dc2cd497f782d16311cd0e658e2e90",
"text": "We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. It subsumes may previous proposals, but encourages better information-hiding and encapsulation.",
"title": ""
},
{
"docid": "f7c2ebd19c41b697d52850a225bfe8a0",
"text": "There is currently a misconception among designers and users of free space laser communication (lasercom) equipment that 1550 nm light suffers from less atmospheric attenuation than 785 or 850 nm light in all weather conditions. This misconception is based upon a published equation for atmospheric attenuation as a function of wavelength, which is used frequently in the free-space lasercom literature. In hazy weather (visibility > 2 km), the prediction of less atmospheric attenuation at 1550 nm is most likely true. However, in foggy weather (visibility < 500 m), it appears that the attenuation of laser light is independent of wavelength, ie. 785 nm, 850 nm, and 1550 nm are all attenuated equally by fog. This same wavelength independence is also observed in snow and rain. This observation is based on an extensive literature search, and from full Mie scattering calculations. A modification to the published equation describing the atmospheric attenuation of laser power, which more accurately describes the effects of fog, is offered. This observation of wavelength-independent attenuation in fog is important, because fog, heavy snow, and extreme rain are the only types of weather that are likely to disrupt short (<500 m) lasercom links. Short lasercom links will be necessary to meet the high availability requirements of the telecommunications industry.",
"title": ""
},
{
"docid": "2fdcfab59f54410627ed13c2e46689cd",
"text": "The field of software visualization (SV) investigates approaches and techniques for static and dynamic graphical representations of algorithms, programs (code), and processed data. SV is concerned primarily with the analysis of programs and their development. The goal is to improve our understanding of inherently invisible and intangible software, particularly when dealing with large information spaces that characterize domains like software maintenance, reverse engineering, and collaborative development. The main challenge is to find effective mappings from different software aspects to graphical representations using visual metaphors. This paper provides an overview of the SV research, describes current research directions, and includes an extensive list of recommended readings.",
"title": ""
},
{
"docid": "8759277ebf191306b3247877e2267173",
"text": "As organizations scale up, their collective knowledge increases, and the potential for serendipitous collaboration between members grows dramatically. However, finding people with the right expertise or interests becomes much more difficult. Semi-structured social media, such as blogs, forums, and bookmarking, present a viable platform for collaboration-if enough people participate, and if shared content is easily findable. Within the trusted confines of an organization, users can trade anonymity for a rich identity that carries information about their role, location, and position in its hierarchy.\n This paper describes WaterCooler, a tool that aggregates shared internal social media and cross-references it with an organization's directory. We deployed WaterCooler in a large global enterprise and present the results of a preliminary user study. Despite the lack of complete social networking affordances, we find that WaterCooler changed users' perceptions of their workplace, made them feel more connected to each other and the company, and redistributed users' attention outside their own business groups.",
"title": ""
},
{
"docid": "7c6708511e8a19c7a984ccc4b5c5926e",
"text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. The technique described in this paper is easy to reproduce even for young surgeons.",
"title": ""
},
{
"docid": "8cdd54a8bd288692132b57cb889b2381",
"text": "This research deals with the soft computing methodology of fuzzy cognitive map (FCM). Here a mathematical description of FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of FCM in modeling complex systems and the application of FCM to modeling and describing the behavior of a heat exchanger system is presented. The applicability of FCM to model the supervisor of complex systems is discussed and the FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.",
"title": ""
},
{
"docid": "48019a3106c6d74e4cfcc5ac596d4617",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "17a1ff743fa32c6cada17772a8e78960",
"text": "Made of flexible material, requiring ultra-low power consumption, cheap to manufacture, and most importantly, easy and convenient to read, Epapers of the future are just around the corner, with the promise to hold libraries on a chip and replace most printed newspapers before the end of the next decade. This paper discusses the history, features, and technology of the electronic paper revolution. It also highlights the challenges facing E-paper and its various applications. The paper concludes that E-paper, which can be termed as the second paper revolution, is closer to changing the way we read, write and study; a revolution so phenomenal that some researchers see it as second only to the invention of the printing press by Gutenberg in the 15th century. (",
"title": ""
},
{
"docid": "2e87c4fbb42424f3beb07e685c856487",
"text": "Conventional wisdom ties the origin and early evolution of the genus Homo to environmental changes that occurred near the end of the Pliocene. The basic idea is that changing habitats led to new diets emphasizing savanna resources, such as herd mammals or underground storage organs. Fossil teeth provide the most direct evidence available for evaluating this theory. In this paper, we present a comprehensive study of dental microwear in Plio-Pleistocene Homo from Africa. We examined all available cheek teeth from Ethiopia, Kenya, Tanzania, Malawi, and South Africa and found 18 that preserved antemortem microwear. Microwear features were measured and compared for these specimens and a baseline series of five extant primate species (Cebus apella, Gorilla gorilla, Lophocebus albigena, Pan troglodytes, and Papio ursinus) and two protohistoric human foraging groups (Aleut and Arikara) with documented differences in diet and subsistence strategies. Results confirmed that dental microwear reflects diet, such that hard-object specialists tend to have more large microwear pits, whereas tough food eaters usually have more striations and smaller microwear features. Early Homo specimens clustered with baseline groups that do not prefer fracture resistant foods. Still, Homo erectus and individuals from Swartkrans Member 1 had more small pits than Homo habilis and specimens from Sterkfontein Member 5C. These results suggest that none of the early Homo groups specialized on very hard or tough foods, but that H. erectus and Swartkrans Member 1 individuals ate, at least occasionally, more brittle or tough items than other fossil hominins studied.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
}
] |
scidocsrr
|
2adbb5e1dae66d45f8686037d320c47b
|
Optimal User Scheduling and Power Allocation for Millimeter Wave NOMA Systems
|
[
{
"docid": "47d997ef6c4f70105198415002c2c5dc",
"text": "The potential of using of millimeter wave (mmWave) frequency for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, the conventional fully digital beamforming methods which require one radio frequency (RF) chain per antenna element is not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components in high frequencies. To address the challenge of this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with much fewer number of RF chains. Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer number of RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multi-user multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves a performance close to the performance of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used.",
"title": ""
}
] |
[
{
"docid": "7912241009e05de6af4e41aa2f48a1ec",
"text": "CONTEXT/OBJECTIVE\nNot much is known about the implication of adipokines and different cytokines in gestational diabetes mellitus (GDM) and macrosomia. The purpose of this study was to assess the profile of these hormones and cytokines in macrosomic babies, born to gestational diabetic women.\n\n\nDESIGN/SUBJECTS\nA total of 59 women (age, 19-42 yr) suffering from GDM with their macrosomic babies (4.35 +/- 0.06 kg) and 60 healthy age-matched pregnant women and their newborns (3.22 +/- 0.08 kg) were selected.\n\n\nMETHODS\nSerum adipokines (adiponectin and leptin) were quantified using an obesity-related multiple ELISA microarray kit. The concentrations of serum cytokines were determined by ELISA.\n\n\nRESULTS\nSerum adiponectin levels were decreased, whereas the concentrations of leptin, inflammatory cytokines, such as IL-6 and TNF-alpha, were significantly increased in gestational diabetic mothers compared with control women. The levels of these adipocytokines were diminished in macrosomic babies in comparison with their age-matched control newborns. Serum concentrations of T helper type 1 (Th1) cytokines (IL-2 and interferon-gamma) were decreased, whereas IL-10 levels were significantly enhanced in gestational diabetic mothers compared with control women. Macrosomic children exhibited high levels of Th1 cytokines and low levels of IL-10 compared with control infants. Serum IL-4 levels were not altered between gestational diabetic mothers and control mothers or the macrosomic babies and newborn control babies.\n\n\nCONCLUSIONS\nGDM is linked to the down-regulation of adiponectin along with Th1 cytokines and up-regulation of leptin and inflammatory cytokines. Macrosomia was associated with the up-regulation of Th1 cytokines and the down-regulation of the obesity-related agents (IL-6 and TNF-alpha, leptin, and adiponectin).",
"title": ""
},
{
"docid": "ff25e0ec49ee8b5c85afbbdefd8ca837",
"text": "In the field of psychology, the practice of p value null-hypothesis testing is as widespread as ever. Despite this popularity, or perhaps because of it, most psychologists are not aware of the statistical peculiarities of the p value procedure. In particular, p values are based on data that were never observed, and these hypothetical data are themselves influenced by subjective intentions. Moreover, p values do not quantify statistical evidence. This article reviews these p value problems and illustrates each problem with concrete examples. The three problems are familiar to statisticians but may be new to psychologists. A practical solution to these p value problems is to adopt a model selection perspective and use the Bayesian information criterion (BIC) for statistical inference (Raftery, 1995). The BIC provides an approximation to a Bayesian hypothesis test, does not require the specification of priors, and can be easily calculated from SPSS output.",
"title": ""
},
{
"docid": "67e85e8b59ec7dc8b0019afa8270e861",
"text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.",
"title": ""
},
{
"docid": "8933d92ec139e80ffb8f0ebaa909d76c",
"text": "Reading an article and answering questions about its content is a fundamental task for natural language understanding. While most successful neural approaches to this problem rely on recurrent neural networks (RNNs), training RNNs over long documents can be prohibitively slow. We present a novel framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance. Our approach combines a coarse, inexpensive model for selecting one or more relevant sentences and a more expensive RNN that produces the answer from those sentences. A central challenge is the lack of intermediate supervision for the coarse model, which we address using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a newly-gathered dataset, while reducing the number of sequential RNN steps by 88% against a standard sequence to sequence model.",
"title": ""
},
{
"docid": "7165568feac9cc0bc0c1056b930958b8",
"text": "We describe a 63-year-old woman with an asymptomatic papular eruption on the vulva. Clinically, the lesions showed multiple pin-head-sized whitish papules on the labia major. Histologically, the biopsy specimen showed acantholysis throughout the epidermis with the presence of dyskeratotic cells resembling corps ronds and grains, hyperkeratosis and parakeratosis. These clinical and histological findings were consistent with the diagnosis of papular acantholytic dyskeratosis of the vulva which is a rare disorder, first described in 1984.",
"title": ""
},
{
"docid": "2f9de2e94c6af95e9c2e9eb294a7696c",
"text": "The rapid growth of Electronic Health Records (EHRs), as well as the accompanied opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interests and attentions. Recent progress in the design and applications of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model takes a modified generative adversarial network namely ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several stat-of-the-art baselines.",
"title": ""
},
{
"docid": "5bf90680117b7db4315cce18bc9aefa2",
"text": "Motivated by aiding human operators in the detection of dangerous objects in passenger luggage, such as in airports, we develop an automatic object detection approach for multi-view X-ray image data. We make three main contributions: First, we systematically analyze the appearance variations of objects in X-ray images from inspection systems. We then address these variations by adapting standard appearance-based object detection approaches to the specifics of dual-energy X-ray data and the inspection scenario itself. To that end we reduce projection distortions, extend the feature representation, and address both in-plane and out-of-plane object rotations, which are a key challenge compared to many detection tasks in photographic images. Finally, we propose a novel multi-view (multi-camera) detection approach that combines single-view detections from multiple views and takes advantage of the mutual reinforcement of geometrically consistent hypotheses. While our multi-view approach can be used atop arbitrary single-view detectors, thus also for multi-camera detection in photographic images, we evaluate our method on detecting handguns in carry-on luggage. Our results show significant performance gains from all components.",
"title": ""
},
{
"docid": "047112c682f64fc6a272a7e80d5f1a1b",
"text": "In this paper, we study an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes. We find that community embedding is not only useful for community-level applications such as graph visualization, but also beneficial to both community detection and node classification. To learn such embedding, our insight hinges upon a closed loop among community embedding, community detection and node embedding. On the one hand, node embedding can help improve community detection, which outputs good communities for fitting better community embedding. On the other hand, community embedding can be used to optimize the node embedding by introducing a community-aware high-order proximity. Guided by this insight, we propose a novel community embedding framework that jointly solves the three tasks together. We evaluate such a framework on multiple real-world datasets, and show that it improves graph visualization and outperforms state-of-the-art baselines in various application tasks, e.g., community detection and node classification.",
"title": ""
},
{
"docid": "b279c6d3e71e544d4e99152d968d0a83",
"text": "Registration of 3D point clouds is a problem that arises in a variety of research areas such as computer vision, computer graphics and computational geometry. This situation causes most papers in the area to focus on solving practical problems by using data structures often developed in theoretical contexts. Consequently, discrepancies arise between asymptotic cost and experimental performance. The point cloud registration or matching problem encompasses many different steps. Among them, the computation of the distance between two point sets (often refereed to as residue computation) is crucial and can be seen as an aggregate of range searching or nearest neighbor searching. In this paper, we aim at providing theoretical analysis and experimental performance of range searching and nearest neighbor data structures applied to 3D point cloud registration. Performance of widely used data structures such as compressed octrees, KDtrees, BDtrees and regular grids is reported. Additionally, we present a new hybrid data structure named GridDS, which combines a regular grid with some preexisting “inner” data structure in order to meet the best asymptotic bounds while also obtaining the best performance. The experimental evaluation in both synthetic and real data demonstrates that the hybrid data structures built using GridDS improve the running times of the single data structures. Thus, as we have studied the performances of the state-of-the-art techniques managing to improve their respective running times thanks to GridDS, this paper presents the best running time for point cloud residue computation up to date.",
"title": ""
},
{
"docid": "10e88f0d1a339c424f7e0b8fa5b43c1e",
"text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date",
"title": ""
},
{
"docid": "d3e51c3f9ece671cf5e8e1f630c83a8c",
"text": "Bayesian (machine) learning has been playing a significant role in machine learning for a long time due to its particular ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning’s great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of playing with the fixed-dimensional probabilistic distributions of Bayesian learning, BNL creates a new “game” with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential to solve real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this. The books and surveys on BNL written by statisticians are overcomplicated and filled with tedious theories and proofs. Each is certainly meaningful but may scare away new researchers, especially those with computer science backgrounds. Hence, the aim of this article is to provide a plain-spoken, yet comprehensive, theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in our current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure—from selecting the appropriate stochastic processes through manipulation to executing the model inference algorithms. At each step, past efforts have been thoroughly summarised and discussed. In addition, we have reviewed the common methods for implementing BNL in various machine-learning tasks along with its diverse applications in the real world as examples to motivate future studies.",
"title": ""
},
{
"docid": "b129436efcd5e939c9fde092f4eb1a80",
"text": "NEC’s video identification technology enables instant video content identification by extracting a unique descriptor called a “video signature” from video content. This technology is approved as the MPEG-7 Video Signature Tool; an international standard of interoperable descriptors used for video identification. The video signature is an extremely robust tool for identifying videos with alternations and editing effects. It facilitates search of very short video scenes. It also has a compact design, making ultrafast searches possible via a compact system. In this paper, we propose video identification solutions for the mass media industries. They adopt video signatures as metadata descriptions to enable efficient video registration operations and the visualization of video content relationships.",
"title": ""
},
{
"docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522",
"text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "ab5cf1d4c03dea07a46587b73235387c",
"text": "Image is usually taken for expressing some kinds of emotions or purposes, such as love, celebrating Christmas. There is another better way that combines the image and relevant song to amplify the expression, which has drawn much attention in the social network recently. Hence, the automatic selection of songs should be expected. In this paper, we propose to retrieve semantic relevant songs just by an image query, which is named as the image2song problem. Motivated by the requirements of establishing correlation in semantic/content, we build a semantic-based song retrieval framework, which learns the correlation between image content and lyric words. This model uses a convolutional neural network to generate rich tags from image regions, a recurrent neural network to model lyric, and then establishes correlation via a multi-layer perceptron. To reduce the content gap between image and lyric, we propose to make the lyric modeling focus on the main image content via a tag attention. We collect a dataset from the social-sharing multimodal data to study the proposed problem, which consists of (image, music clip, lyric) triplets. We demonstrate that our proposed model shows noticeable results in the image2song retrieval task and provides suitable songs. Besides, the song2image task is also performed.",
"title": ""
},
{
"docid": "eb8f0a30d222b89e5fda3ea1d83ea525",
"text": "We present a method which exploits automatically generated scientific discourse annotations to create a content model for the summarisation of scientific articles. Full papers are first automatically annotated using the CoreSC scheme, which captures 11 contentbased concepts such as Hypothesis, Result, Conclusion etc at the sentence level. A content model which follows the sequence of CoreSC categories observed in abstracts is used to provide the skeleton of the summary, making a distinction between dependent and independent categories. Summary creation is also guided by the distribution of CoreSC categories found in the full articles, in order to adequately represent the article content. Finally, we demonstrate the usefulness of the summaries by evaluating them in a complex question answering task. Results are very encouraging as summaries of papers from automatically obtained CoreSCs enable experts to answer 66% of complex content-related questions designed on the basis of paper abstracts. The questions were answered with a precision of 75%, where the upper bound for human summaries (abstracts) was 95%.",
"title": ""
},
{
"docid": "9dbea5d01d446bd829085e445f11c5a7",
"text": "We present the results of a large-scale, end-to-end human evaluation of various sentiment summarization models. The evaluation shows that users have a strong preference for summarizers that model sentiment over non-sentiment baselines, but have no broad overall preference between any of the sentiment-based models. However, an analysis of the human judgments suggests that there are identifiable situations where one summarizer is generally preferred over the others. We exploit this fact to build a new summarizer by training a ranking SVM model over the set of human preference judgments that were collected during the evaluation, which results in a 30% relative reduction in error over the previous best summarizer.",
"title": ""
},
{
"docid": "b0ea0b7e3900b440cb4e1d5162c6830b",
"text": "Product Lifecycle Management (PLM) solutions have been serving as the basis for collaborative product definition, manufacturing, and service management in many industries. They capture and provide access to product and process information and preserve integrity of information throughout the lifecycle of a product. Efficient growth in the role of Building Information Modeling (BIM) can benefit vastly from unifying solutions to acquire, manage and make use of information and processes from various project and enterprise level systems, selectively adapting functionality from PLM systems. However, there are important differences between PLM’s target industries and the Architecture, Engineering, and Construction (AEC) industry characteristics that require modification and tailoring of some aspects of current PLM technology. In this study we examine the fundamental PLM functionalities that create synergy with the BIM-enabled AEC industry. We propose a conceptual model for the information flow and integration between BIM and PLM systems. Finally, we explore the differences between the AEC industry and traditional scope of service for PLM solutions.",
"title": ""
},
{
"docid": "ac3511f0a3307875dc49c26da86afcfb",
"text": "With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in its raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noises and redundancies. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach.",
"title": ""
},
{
"docid": "aec48ddea7f21cabb9648eec07c31dcd",
"text": "High voltage Marx generator implementation using IGBT (Insulated Gate Bipolar Transistor) stacks is proposed in this paper. To protect the Marx generator at the moment of breakdown, AOCP (Active Over-Current Protection) part is included. The Marx generator is composed of 12 stages and each stage is made of IGBT stacks, two diode stacks, and capacitors. IGBT stack is used as a single switch. Diode stacks and inductors are used to charge the high voltage capacitor at each stage without power loss. These are also used to isolate input and high voltage negative output in high voltage generation mode. The proposed Marx generator implementation uses IGBT stack with a simple driver and has modular design. This system structure gives compactness and easiness to implement the total system. Some experimental and simulated results are included to verify the system performances in this paper.",
"title": ""
},
{
"docid": "e4ea761d48fafeeea1f143833d7362fe",
"text": "This paper proposes a novel approach to help computing system administrators in monitoring the security of their systems. This approach is based on modeling the system as a privilege graph exhibiting operational security vulnerabilities and on transforming this privilege graph into a Markov chain corresponding to all possible successful attack scenarios. A set of tools has been developed to generate automatically the privilege graph of a Unix system, to transform it into the corresponding Markov chain and to compute characteristic measures of the operational system security.",
"title": ""
}
] |
scidocsrr
|
b463b1233cb52eded2ff6a2f2dc17dd7
|
Particle Swarm Optimization: Technique, System and Challenges
|
[
{
"docid": "624e78153b58a69917d313989b72e6bf",
"text": "In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle Swarm Optimization (TV-MOPSO). TV-MOPSO is made adaptive in nature by allowing its vital parameters (viz., inertia weight and acceleration coefficients) to change with iterations. This adaptiveness helps the algorithm to explore the search space more efficiently. A new diversity parameter has been used to ensure sufficient diversity amongst the solutions of the non-dominated fronts, while retaining at the same time the convergence to the Pareto-optimal front. TV-MOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms for 11 function optimization problems, using different performance measures. 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "c8e5257c2ed0023dc10786a3071c6e6a",
"text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"title": ""
},
{
"docid": "2679d4bdb1aff322a7ec85d9712abfc7",
"text": "The multiscale second order local structure of an image (Hessian) is examined with the purpose of developing a vessel enhancement filter. A vesselness measure is obtained on the basis of all eigenvalues of the Hessian. This measure is tested on two dimensional DSA and three dimensional aortoiliac and cerebral MRA data. Its clinical utility is shown by the simultaneous noise and background suppression and vessel enhancement in maximum intensity projections and volumetric displays.",
"title": ""
},
{
"docid": "3091211c95851b090187d96c2971d720",
"text": "the dot-coms in the late 1990s, this AI boom was characterized by unrealistic expectations. When the boom went bust, the field fell into a trough of disillusionment that Americans call the AI Winter. A similar disillusionment had already struck earlier, elsewhere (see the “Comments on the Lighthill Report” sidebar). If a technology has something to offer, it won’t stay in the trough of disillusionment, just as AI has risen to a new sustainable level of activity. For example, Figure 2 shows that although AI conference attendance numbers have been stable since 1995, they are nowhere near the unsustainable peak of the mid-1980s. With this special issue, I wanted to celebrate and record modern AI’s achievements and activity. Hence, the call for papers asked for AI’s current trends and historical successes. But the best-laid plans can go awry. It turns out that my “coming of age” special issue was about five to 10 years too late. AI is no longer a bleeding-edge technology—hyped by its proponents and mistrusted by the mainstream. In the 21st century, AI is not necessarily amazing. Rather, it’s often routine.",
"title": ""
},
{
"docid": "8231e10912b42e0f8ac90392e6e0efbb",
"text": "Zobrist Hashing: An Efficient Work Distribution Method for Parallel Best-First Search Yuu Jinnai, Alex Fukunaga VIS: Text and Vision Oral Presentations 1326 SentiCap: Generating Image Descriptions with Sentiments Alexander Patrick Mathews, Lexing Xie, Xuming He 1950 Reading Scene Text in Deep Convolutional Sequences Pan He, Weilin Huang, Yu Qiao, Chen Change Loy, Xiaoou Tang 1247 Creating Images by Learning Image Semantics Using Vector Space Models Derrall Heath, Dan Ventura Poster Spotlight Talks 655 Towards Domain Adaptive Vehicle Detection in Satellite Image by Supervised SuperResolution Transfer Liujuan Cao, Rongrong Ji, Cheng Wang, Jonathan Li 499 Transductive Zero-Shot Recognition via Shared Model Space Learning Yuchen Guo, Guiguang Ding, Xiaoming Jin, Jianmin Wang 1255 Exploiting View-Specific Appearance Similarities Across Classes for Zero-shot Pose Prediction: A Metric Learning Approach Alina Kuznetsova, Sung Ju Hwang, Bodo Rosenhahn, Leonid Sigal NLP: Topic Flow Oral Presentations 744 Topical Analysis of Interactions Between News and Social Media Ting Hua, Yue Ning, Feng Chen, Chang-Tien Lu, Naren Ramakrishnan 1561 Tracking Idea Flows between Social Groups Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, Yangqiu Song 1201 Modeling Evolving Relationships Between Characters in Literary Novels Snigdha Chaturvedi, Shashank Srivastava, Hal Daume III, Chris Dyer Poster Spotlight Talks 405 Identifying Search",
"title": ""
},
{
"docid": "c58cfb643a35033d59fe50c89fe1d445",
"text": "This survey of denial-of-service threats and countermeasures considers wireless sensor platforms' resource constraints as well as the denial-of-sleep attack, which targets a battery-powered device's energy supply. Here, we update the survey of denial-of-service threats with current threats and countermeasures.In particular, we more thoroughly explore the denial-of-sleep attack, which specifically targets the energy-efficient protocols unique to sensor network deployments. We start by exploring such networks' characteristics and then discuss how researchers have adapted general security mechanisms to account for these characteristics.",
"title": ""
},
{
"docid": "e73de1e6f191fef625f75808d7fbfbb1",
"text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.",
"title": ""
},
{
"docid": "6adb6b6a60177ed445a300a2a02c30b9",
"text": "Barrier coverage is a critical issue in wireless sensor networks for security applications (e.g., border protection) where directional sensors (e.g., cameras) are becoming more popular than omni-directional scalar sensors (e.g., microphones). However, barrier coverage cannot be guaranteed after initial random deployment of sensors, especially for directional sensors with limited sensing angles. In this paper, we study how to efficiently use mobile sensors to achieve \\(k\\) -barrier coverage. In particular, two problems are studied under two scenarios. First, when only the stationary sensors have been deployed, what is the minimum number of mobile sensors required to form \\(k\\) -barrier coverage? Second, when both the stationary and mobile sensors have been pre-deployed, what is the maximum number of barriers that could be formed? To solve these problems, we introduce a novel concept of weighted barrier graph (WBG) and prove that determining the minimum number of mobile sensors required to form \\(k\\) -barrier coverage is related with finding \\(k\\) vertex-disjoint paths with the minimum total length on the WBG. With this observation, we propose an optimal solution and a greedy solution for each of the two problems. Both analytical and experimental studies demonstrate the effectiveness of the proposed algorithms.",
"title": ""
},
{
"docid": "829064562b2070d532b3bf108adb0ea2",
"text": "The design of power semiconductor chips has always involved a trade-off between switching speed, static losses, safe operating area and short-circuit withstanding capability. This paper presents an optimized structure for 1200 V IGBTs from the viewpoint of all-round performance. The new device is based on a novel wide cell pitch carrier stored trench bipolar transistor (CSTBT). Unlike conventional trench gate IGBTs, this structure simultaneously achieves both low on-state voltage and the rugged short-circuit capability desired for industrial applications.",
"title": ""
},
{
"docid": "2f0c2f19a8ad34d9335fff1515af2a65",
"text": "In this paper, we present a system to detect symbols on roads (e.g. arrows, speed limits, bus lanes and other pictograms) with a common monoscopic or stereoscopic camera system. No manual labeling of images is necessary since the exact definitions of the symbols in the legal instructions for road paintings are used. With those vector graphics an Optical Character Recognition (OCR) System is trained. If only a monoscopic camera is used, the vanishing point is estimated and an inverse perspective transformation is applied to obtain a distortion free top-view. In case of the stereoscopic camera setup, the 3D reconstruction is projected to a ground plane. TESSERACT, a common OCR system is used to classify the symbols. If odometry or position information is available, a spatial filtering and mapping is possible. The obtained information can be used on one side to improve localization, on the other side to provide further information for planning or generation of planning maps.",
"title": ""
},
{
"docid": "975d1b5edfc68e8041794db9cc50d0d2",
"text": "I’ve taken to writing this series of posts on a statistical view of deep learning with two principal motivations in mind. The first was as a personal exercise to make concrete and to test the limits of the way that I think about and use deep learning in my every day work. The second, was to highlight important statistical connections and implications of deep learning that I have not seen made in the popular courses, reviews and books on deep learning, but which are extremely important to keep in mind. This document forms a collection of these essays originally posted at blog.shakirm.com.",
"title": ""
},
{
"docid": "ccfb258fa88118aedbba5fa803808f75",
"text": "Face detection has been well studied for many years and one of remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel contextassisted single shot face detector, named PyramidBox to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel context anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level context semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a contextsensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversity of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art over the two common face detection benchmarks, FDDB and WIDER FACE. Our code is available in PaddlePaddle: https://github.com/PaddlePaddle/models/tree/develop/ fluid/face_detection.",
"title": ""
},
{
"docid": "f7bdf07ef7a45c3e261e4631743c1882",
"text": "Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sampleefficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actorcritic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sampleefficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learning deep RLbased dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.",
"title": ""
},
{
"docid": "8517f1309ab9ae9bc6d19bdaca54891b",
"text": "Air pollution may cause many severe diseases. An efficient air quality monitoring system is of great benefit for human health and air pollution control. In this paper, we study image-based air quality analysis, in particular, the concentration estimation of particulate matter with diameters less than 2.5 micrometers (PM2.5). The proposed method uses a deep Convolutional Neural Network (CNN) to classify natural images into different categories based on their PM2.5 concentrations. In order to evaluate the proposed method, we created a dataset that contains total 591 images taken in Beijing with corresponding PM2.5 concentrations. The experimental results demonstrate that our method are valid for image-based PM2.5 concentration estimation.",
"title": ""
},
{
"docid": "6c5a5bc775316efc278285d96107ddc6",
"text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.",
"title": ""
},
{
"docid": "411d3048bd13f48f0c31259c41ff2903",
"text": "In computer vision, object detection is addressed as one of the most challenging problems as it is prone to localization and classification error. The current best-performing detectors are based on the technique of finding region proposals in order to localize objects. Despite having very good performance, these techniques are computationally expensive due to having large number of proposed regions. In this paper, we develop a high-confidence region-based object detection framework that boosts up the classification performance with less computational burden. In order to formulate our framework, we consider a deep network that activates the semantically meaningful regions in order to localize objects. These activated regions are used as input to a convolutional neural network (CNN) to extract deep features. With these features, we train a set of class-specific binary classifiers to predict the object labels. Our new region-based detection technique significantly reduces the computational complexity and improves the performance in object detection. We perform rigorous experiments on PASCAL, SUN, MIT-67 Indoor and MSRC datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in recognizing objects.",
"title": ""
},
{
"docid": "feb184ada1d0deb3c1798beb3da8ff53",
"text": "Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D scene flow from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.",
"title": ""
},
{
"docid": "74b163a2c2f149dce9850c6ff5d7f1f6",
"text": "The vast majority of cutaneous canine nonepitheliotropic lymphomas are of T cell origin. Nonepithelial Bcell lymphomas are extremely rare. The present case report describes a 10-year-old male Golden retriever that was presented with slowly progressive nodular skin lesions on the trunk and limbs. Histopathology of skin biopsies revealed small periadnexal dermal nodules composed of rather pleomorphic round cells with round or contorted nuclei. The diagnosis of nonepitheliotropic cutaneous B-cell lymphoma was based on histopathological morphology and case follow-up, and was supported immunohistochemically by CD79a positivity.",
"title": ""
},
{
"docid": "a691214a7ac8a1a7b4ad6fe833afd572",
"text": "Within the field of computer vision, change detection algorithms aim at automatically detecting significant changes occurring in a scene by analyzing the sequence of frames in a video stream. In this paper we investigate how state-of-the-art change detection algorithms can be combined and used to create a more robust algorithm leveraging their individual peculiarities. We exploited genetic programming (GP) to automatically select the best algorithms, combine them in different ways, and perform the most suitable post-processing operations on the outputs of the algorithms. In particular, algorithms’ combination and post-processing operations are achieved with unary, binary and ${n}$ -ary functions embedded into the GP framework. Using different experimental settings for combining existing algorithms we obtained different GP solutions that we termed In Unity There Is Strength. These solutions are then compared against state-of-the-art change detection algorithms on the video sequences and ground truth annotations of the ChangeDetection.net 2014 challenge. Results demonstrate that using GP, our solutions are able to outperform all the considered single state-of-the-art change detection algorithms, as well as other combination strategies. The performance of our algorithm are significantly different from those of the other state-of-the-art algorithms. This fact is supported by the statistical significance analysis conducted with the Friedman test and Wilcoxon rank sum post-hoc tests.",
"title": ""
},
{
"docid": "db1f603628301600b8ae792b1fcc0c27",
"text": "The authors propose a robust clustering method based on the Least Median of Squares principle. This is a general-purpose method, and is efficient for finding a majority structure using dictionary sorting and a smallest square region enclosing for a set of data points. In addition, the authors propose a robust line fitting algorithm using this method. The fitting problem is solved through a dual transform of a pair of points selected from data space to a parameter space, and then clustering the mapped points. Moreover, a function to identify outliers based on the size of the square region converging in the parameter space is defined, and is used to distinguish between normal values and outliers. The proposed method enables robust line fitting for data points found in images or in files. The resulting line statistically satisfies the least median condition. Furthermore, line fitting to groups of points with a high outlier percentage is also possible. Here, outliers and normal values are removed in sequence. The validity of the proposed method has been shown based on the results of simulation experiments. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(14): 92–100, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.1225",
"title": ""
},
{
"docid": "e82e33273727eaa5681b4d79ded2d0ba",
"text": "In previous work the author has introduced a lambda calculus SLR with modal and linear types which serves as an extension of Bellantoni-Cook's function algebra BC to higher types. It is a step towards a functional programming language in which all programs run in polynomial time. While this previous work was concerned with the syntactic metatheory of SLR in this paper we develop a semantics of SLR in terms of Chu spaces over a certain category of sheaves from which it follows that all expressible functions are indeed in PTIME. We notice a similarity between the Chu space interpretation and CPS translation which as we hope will have further applications in functional programming.",
"title": ""
}
] |
scidocsrr
|
d4de40334b59b8f5e120336e5247c5ad
|
Churn in Social Networks
|
[
{
"docid": "ac41c57bcb533ab5dabcc733dd69a705",
"text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.",
"title": ""
}
] |
[
{
"docid": "f4d91436f57fdc97e9c47a8c291dc1ee",
"text": "The simulation, verification and realization of complex control arithmetic are introduced in detail. A compound Fuzzy PID level control system is realized on Siemens PROFINET platform. The communication method between supervision configuration software WinCC 6.0 and engineering calculating software MATLAB through OPC technology is introduced in this paper. Hardware configuration and programming are completed by STEP7 provided by Siemens. Compound Fuzzy PID arithmetic is realized by MATLAB. The programming and calculating of complex arithmetic in control system is simplified consumedly. The method could be broadly applied in practice systems for its simple programming, easy realization and better performance criteria.",
"title": ""
},
{
"docid": "b87f7587821f4a8718396a1dd7fa479e",
"text": "In the future, robots will be important device widely in our daily lives to achieve complicated tasks. To achieve the tasks, there are some demands for the robots. In this paper, two strong demands of them are taken attention. First one is multiple-degrees of freedom (DOF), and the second one is miniaturization of the robots. Although rotary actuators is necessary to get multiple-DOF, miniaturization is difficult with rotary motors which are usually utilized for multiple-DOF robots. Here, tendon-driven rotary actuator is a candidate to solve the problems of the rotary actuators. The authors proposed a type of tendon-driven rotary actuator using thrust wires. However, big mechanical loss and frictional loss occurred because of the complicated structure of connection points. As the solution for the problems, this paper proposes a tendon-driven rotary actuator for haptics with thrust wires and polyethylene (PE) line. In the proposed rotary actuator, a PE line is used in order to connect the tip points of thrust wires and the end effector. The validity of the proposed rotary actuator is evaluated by experiments.",
"title": ""
},
{
"docid": "d035c972226a97ec3985cd76bf9afc8c",
"text": "Surfactants and their mixtures can drastically change the interfacial properties and hence are used in many industrial processes such as dispersion/flocculation, flotation, emulsification, corrosion inhibition, cosmetics, drug delivery, chemical mechanical polishing, enhanced oil recovery, and nanolithography. A review of studies on adsorption of single surfactant as well as mixtures of various types (anionic-cationic, anionic-nonionic, cationic-nonionic, cationic-zwitterionic and nonionic-nonionic) is presented here along with mechanisms involved. Results obtained using techniques such as zeta potential, flotation, AFM, specular neutron reflectivity, small angle neutron scattering, fluorescence, ESR, Raman spectroscopy, ellipsometry, HPLC and ATR-IR are reviewed along with those from traditional techniques to elucidate the mechanisms of adsorption and particularly to understand synergistic/antagonistic interactions at solution/liquid interfaces and nanostructures of surface aggregates. In addition, adsorption of several mixed surfactant systems is considered due to their industrial relevance. Finally an attempt is made to derive structure-property relationships to provide a solid foundation for the design and use of surfactant formulations for industrial applications.",
"title": ""
},
{
"docid": "33e3e5aad64af3f0c2ae665988e7ff9d",
"text": "Developing wireless nanodevices and nanosystems are of critical importance for sensing, medical science, defense technology, and even personal electronics. It is highly desirable for wireless devices and even required for implanted biomedical devices that they be self-powered without use of a battery. It is essential to explore innovative nanotechnologies for converting mechanical energy (such as body movement, muscle stretching), vibrational energy (such as acoustic or ultrasonic waves), and hydraulic energy (such as body fluid flow) into electrical energy, which will be used to power nanodevices without a battery. This is a key step towards self-powered nanosystems. We have demonstrated an innovative approach for converting mechanical energy into electrical energy by piezoelectric zinc oxide nanowire (NW) arrays. The operation mechanism of the electric generator relies on the unique coupling of the piezoelectric and semiconducting properties of ZnO as well as the gating effect of the Schottky barrier formed between the metal tip and the NW. Based on this mechanism, we have recently developed a DC nanogenerator (NG) driven by the ultrasonic wave in a biofluid and a textile-fiber-based NG for harvesting low-frequency mechanical energy. Furthermore, a new field, ‘‘nanopiezotronics’’, has been developed, which uses coupled piezoelectric–semiconducting properties for fabricating novel and unique electronic devices and components. This Feature Article gives a systematic description of the fundamental mechanism of the NG, its rationally innovative design for high output power, and the new electronics that can be built based on a piezoelectric-driven semiconducting process. A perspective will be given about the future impact of the technologies.",
"title": ""
},
{
"docid": "906b6d1ddac67f9303ce86117b88edf2",
"text": "Over the years, we have harnessed the power of computing to improve the speed of operations and increase in productivity. Also, we have witnessed the merging of computing and telecommunications. This excellent combination of two important fields has propelled our capability even further, allowing us to communicate anytime and anywhere, improving our work flow and increasing our quality of life tremendously. The next wave of evolution we foresee is the convergence of telecommunication, computing, wireless, and transportation technologies. Once this happens, our roads and highways will be both our communications and transportation platforms, which will completely revolutionize when and how we access services and entertainment, how we communicate, commute, navigate, etc., in the coming future. This paper presents an overview of the current state-of-the-art, discusses current projects, their goals, and finally highlights how emergency services and road safety will evolve with the blending of vehicular communication networks with road transportation.",
"title": ""
},
{
"docid": "81e3ff54c7cd97d90108f3a0c838273d",
"text": "Time-of-flight (TOF) cameras are sensors that can measure the depths of scene points, by illuminating the scene with a controlled laser or LED source and then analyzing the reflected light. In this paper, we will first describe the underlying measurement principles of time-of-flight cameras, including: (1) pulsed-light cameras, which measure directly the time taken for a light pulse to travel from the device to the object and back again, and (2) continuous-wave-modulated light cameras, which measure the phase difference between the emitted and received signals, and hence obtain the travel time indirectly. We review the main existing designs, including prototypes as well as commercially available devices. We also review the relevant camera calibration principles, and how they are applied to TOF devices. Finally, we discuss the benefits and challenges of combined TOF and color camera systems.",
"title": ""
},
{
"docid": "64c2b9f59a77f03e6633e5804356e9fc",
"text": "AbstructWe present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (Le., two extra disks) is based on ReedSolomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.",
"title": ""
},
{
"docid": "b9779b478ee8714d5b0f6ce3e0857c9f",
"text": "Sensor-based motion recognition integrates the emerging area of wearable sensors with novel machine learning techniques to make sense of low-level sensor data and provide rich contextual information in a real-life application. Although Human Activity Recognition (HAR) problem has been drawing the attention of researchers, it is still a subject of much debate due to the diverse nature of human activities and their tracking methods. Finding the best predictive model in this problem while considering different sources of heterogeneities can be very difficult to analyze theoretically, which stresses the need of an experimental study. Therefore, in this paper, we first create the most complete dataset, focusing on accelerometer sensors, with various sources of heterogeneities. We then conduct an extensive analysis on feature representations and classification techniques (the most comprehensive comparison yet with 293 classifiers) for activity recognition. Principal component analysis is applied to reduce the feature vector dimension while keeping essential information. The average classification accuracy of eight sensor positions is reported to be 96.44% ± 1.62% with 10-fold evaluation, whereas accuracy of 79.92% ± 9.68% is reached in the subject-independent evaluation. This study presents significant evidence that we can build predictive models for HAR problem under more realistic conditions, and still achieve highly accurate results.",
"title": ""
},
{
"docid": "09a8aee1ff3315562c73e5176a870c37",
"text": "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition.",
"title": ""
},
{
"docid": "df4883ac490f3a27b2dbc310867a3534",
"text": "We present OpenLambda, a new, open-source platform for building next-generation web services and applications in the burgeoning model of serverless computation. We describe the key aspects of serverless computation, and present numerous research challenges that must be addressed in the design and implementation of such systems. We also include a brief study of current web applications, so as to better motivate some aspects of serverless application construction.",
"title": ""
},
{
"docid": "efd2677054bbf5cd32248cdd02b7b7cd",
"text": "\"Where do we go from here?\" is the underlying question regarding the future (perhaps foreseeable) developments in computational chemistry. Although this young discipline has already permeated practically all of chemistry, it is likely to become even more powerful with the rapid development of computational hard- and software.",
"title": ""
},
{
"docid": "49e786f66641194a22bf488c5e97ed7f",
"text": "The non-negative matrix factorization (NMF) determines a lower rank approximation of a matrix where an interger \"!$# is given and nonnegativity is imposed on all components of the factors % & (' and % )'* ( . The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a more relevant interpretation of the results since it gives non-subtractive combinations of non-negative basis vectors. In this paper, we introduce an algorithm for the NMF based on alternating non-negativity constrained least squares (NMF/ANLS) and the active set based fast algorithm for non-negativity constrained least squares with multiple right hand side vectors, and discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by +-, -norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior.",
"title": ""
},
{
"docid": "40cea15a4fbe7f939a490ea6b6c9a76a",
"text": "An application provider leases resources (i.e., virtual machine instances) of variable configurations from a IaaS provider over some lease duration (typically one hour). The application provider (i.e., consumer) would like to minimize their cost while meeting all service level obligations (SLOs). The mechanism of adding and removing resources at runtime is referred to as autoscaling. The process of autoscaling is automated through the use of a management component referred to as an autoscaler. This paper introduces a novel autoscaling approach in which both cloud and application dynamics are modeled in the context of a stochastic, model predictive control problem. The approach exploits trade-off between satisfying performance related objectives for the consumer's application while minimizing their cost. Simulation results are presented demonstrating the efficacy of this new approach.",
"title": ""
},
{
"docid": "1b63892646f19b189b43a67c6f7c3af6",
"text": "The US Environmental Protection Agency Resource Conservation website begins: \"Natural resource and energy conservation is achieved by managing materials more efficiently--reduce, reuse, recycle,\" yet healthcare agencies have been slow to heed and practice this simple message. In dialysis practice, notable for a recurrent, per capita resource consumption and waste generation profile second to none in healthcare, efforts to: (1) minimize water use and wastage; (2) consider strategies to reduce power consumption and/or use alternative power options; (3) develop optimal waste management and reusable material recycling programs; (4) design smart buildings that work with and for their environment; (5) establish research programs that explore environmental practice; all have been largely ignored by mainstream nephrology. Some countries are doing far better than others. In the United Kingdom and some European jurisdictions, exceptional recent progress has been made to develop, adopt, and coordinate eco-practice within dialysis programs. These programs set an example for others to follow. Elsewhere, progress has been piecemeal, at best. This review explores the current extent of \"green\" or eco-dialysis practices. While noting where progress has been made, it also suggests potential new research avenues to develop and follow. One thing seems certain: as global efforts to combat climate change and carbon generation accelerate, the environmental impact of dialysis practice will come under increasing regulatory focus. It is far preferable for the sector to take proactive steps, rather than to await the heavy hand of government or administration to force reluctant and costly compliance on the un-prepared.",
"title": ""
},
{
"docid": "c74bbe9cbf34e841c04830f34e12e141",
"text": "Feature extraction and encoding represent two of the most crucial steps in an action recognition system. For building a powerful action recognition pipeline it is important that both steps are efficient and in the same time provide reliable performance. This work proposes a new approach for feature extraction and encoding that allows us to obtain real-time frame rate processing for an action recognition system. The motion information represents an important source of information within the video. The common approach to extract the motion information is to compute the optical flow. However, the estimation of optical flow is very demanding in terms of computational cost, in many cases being the most significant processing step within the overall pipeline of the target video analysis application. In this work we propose an efficient approach to capture the motion information within the video. Our proposed descriptor, Histograms of Motion Gradients (HMG), is based on a simple temporal and spatial derivation, which captures the changes between two consecutive frames. For the encoding step a widely adopted method is the Vector of Locally Aggregated Descriptors (VLAD), which is an efficient encoding method, however, it considers only the difference between local descriptors and their centroids. In this work we propose Shape Difference VLAD (SD-VLAD), an encoding method which brings complementary information by using the shape information within the encoding process. We validated our proposed pipeline for action recognition on three challenging datasets UCF50, UCF101 and HMDB51, and we propose also a real-time framework for action recognition.",
"title": ""
},
{
"docid": "0cae8939c57ff3713d7321102c80816e",
"text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.",
"title": ""
},
{
"docid": "205e03f589758316987e3eaacee13430",
"text": "Motivated by the technology evolutions and the corresponding changes in user-consumer behavioral patterns, this study applies a Location Based Services (LBS) environmental determinants’ integrated theoretical framework by investigating its role on classifying, profiling and predicting user-consumer behavior. For that purpose, a laboratory LBS application was developed and tested with 110 subjects within the context of a field trial setting in the entertainment industry. Users are clustered into two main types having the “physical” and the “social density” determinants to best discriminate between the resulting clusters. Also, the two clusters differ in terms of their spatial and verbal ability and attitude towards the LBS environment. Similarly, attitude is predicted by the “location”, the “device” and the “mobile connection” LBS environmental determinants for the “walkers in place” (cluster #1) and by all LBS environmental determinants (i.e. those determinants of cluster #1 plus the “digital” and the “social environment” ones) for the “walkers in space” (cluster #2). Finally, the attitude of both clusters’ participants towards the LBS environment affects their behavioral intentions towards using LBS applications, with limited, however, predicting power observed in this relationship.",
"title": ""
},
{
"docid": "52fe696242f399d830d0a675bd766128",
"text": "Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an \"intentional stance\" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a \"teleological stance\" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.",
"title": ""
},
{
"docid": "5f49c93d7007f0f14f1410ce7805b29a",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
},
{
"docid": "4f74d7e1d7d8a98f0228e0c87c0d85d8",
"text": "This paper proposes a novel method for multivehicle detection and tracking using a vehicle-mounted monocular camera. In the proposed method, the features of vehicles are learned as a deformable object model through the combination of a latent support vector machine (LSVM) and histograms of oriented gradients (HOGs). The detection algorithm combines both global and local features of the vehicle as a deformable object model. Detected vehicles are tracked through a particle filter, which estimates the particles' likelihood by using a detection scores map and template compatibility for both root and parts of the vehicle while considering the deformation cost caused by the movement of vehicle parts. Tracking likelihoods are iteratively used as a priori probability to generate vehicle hypothesis regions and update the detection threshold to reduce false negatives of the algorithm presented before. Extensive experiments in urban scenarios showed that the proposed method can achieve an average vehicle detection rate of 97% and an average vehicle-tracking rate of 86% with a false positive rate of less than 0.26%.",
"title": ""
}
] |
scidocsrr
|
e5d8d4121eb4ed0b5c837b62215693aa
|
Generating Paraphrases from DBPedia using Deep Learning
|
[
{
"docid": "c879ee3945592f2e39bb3306602bb46a",
"text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"title": ""
},
{
"docid": "106e6eb9bfd9cf4f64487270901093f0",
"text": "Neural Machine Translation (NMT) has recently attracted a l ot of attention due to the very high performance achieved by deep neural network s in other domains. An inherent weakness in existing NMT systems is their inabil ity to correctly translate rare words: end-to-end NMTs tend to have relatively sma ll vocabularies with a single “unknown-word” symbol representing every possibl e out-of-vocabulary (OOV) word. In this paper, we propose and implement a simple t echnique to address this problem. We train an NMT system on data that is augm ented by the output of a word alignment algorithm, allowing the NMT syste m to output, for each OOV word in the target sentence, its corresponding word in the source sentence. This information is later utilized in a post-process ing step that translates every OOV word using a dictionary. Our experiments on the WMT ’14 English to French translation task show that this simple method prov ides a substantial improvement over an equivalent NMT system that does not use thi technique. The performance of our system achieves a BLEU score of 36.9, whic h improves the previous best end-to-end NMT by 2.1 points. Our model matche s t performance of the state-of-the-art system while using three times less data.",
"title": ""
}
] |
[
{
"docid": "afa3fa35061b54c1ca662f0885b2e4be",
"text": "This paper discusses an analytical study that quantifies the expected earthquake-induced losses in typical office steel frame buildings designed with perimeter special moment frames in highly seismic regions. It is shown that for seismic events associated with low probabilities of occurrence, losses due to demolition and collapse may be significantly overestimated when the expected loss computations are based on analytical models that ignore the composite beam effects and the interior gravity framing system of a steel frame building. For frequently occurring seismic events building losses are dominated by non-structural content repairs. In this case, the choice of the analytical model representation of the steel frame building becomes less important. Losses due to demolition and collapse in steel frame buildings with special moment frames designed with strong-column/weak-beam ratio larger than 2.0 are reduced by a factor of two compared with those in the same frames designed with a strong-column/weak-beam ratio larger than 1.0 as recommended in ANSI/AISC-341-10. The expected annual losses (EALs) of steel frame buildings with SMFs vary from 0.38% to 0.74% over the building life expectancy. The EALs are dominated by repairs of accelerationsensitive non-structural content followed by repairs of drift-sensitive non-structural components. It is found that the effect of strong-column/weak-beam ratio on EALs is negligible. This is not the case when the present value of life-cycle costs is selected as a loss-metric. It is advisable to employ a combination of loss-metrics to assess the earthquake-induced losses in steel frame buildings with special moment frames depending on the seismic performance level of interest. Copyright c © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "c9cb991b69b0759e16043cf7e22baa96",
"text": "An active frequency-modulated continuous wave (FMCW) terahertz (THz) system has been developed to image objects in millimeter-scale resolution in three dimensions. The two-dimensional (2-D) aperture synthesis enables the improvement of cross range resolution, and its high range resolution is achieved through the use of the broadband sweep signal, whose frequency ranges from 514 to 565 GHz. The 3-D data are sufficiently focused by the wavenumber domain approach derived for the dechirped data. We present the THz 3-D imaging strategy using synthetic aperture radar (SAR) techniques, the related theoretical background and the experimental results in this paper.",
"title": ""
},
{
"docid": "9afc04ce0ddde03789f4eaa4eab39e09",
"text": "In this paper we propose a novel method for recognizing human actions by exploiting a multi-layer representation based on a deep learning based architecture. A first level feature vector is extracted and then a high level representation is obtained by taking advantage of a Deep Belief Network trained using a Restricted Boltzmann Machine. The classification is finally performed by a feed-forward neural network. The main advantage behind the proposed approach lies in the fact that the high level representation is automatically built by the system exploiting the regularities in the dataset; given a suitably large dataset, it can be expected that such a representation can outperform a hand-design description scheme. The proposed approach has been tested on two standard datasets and the achieved results, compared with state of the art algorithms, confirm its effectiveness.",
"title": ""
},
{
"docid": "fb4630a6b558ac9b8d8444275e1978e3",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "22724325cdadd29a0d41498a44ab7aca",
"text": "INTRODUCTION: Traumatic loss of teeth in the esthetic zone commonly results in significant loss of buccal bone. This leads to reduced esthetics, problems with phonetics and reduction in function. Single tooth replacement has become an indication for implant-based restoration. In case of lack of bone volume the need of surgical reconstruction of the alveolar ridge is warranted. Several bone grafting techniques have been described to ensure sufficient bone volume for implantation. OBJECTIVES: Evaluation of using the zygomatic buttress as an intraoral bone harvesting donor site for pre-implant grafting. MATERIALS AND METHODS: Twelve patients were selected with limited alveolar ridge defect in the esthetic zone that needs bone grafting procedure prior to dental implants. Patients were treated using a 2-stage technique where bone blocks harvested from the zygomatic buttress region were placed as onlay grafts and fixed with osteosynthesis micro screws. After 4 months of healing, screws were removed for implant placement RESULTS: Harvesting of 12 bone blocks were performed for all patients indicating a success rate of 100% for the zygomatic buttress area as a donor site. Final rehabilitation with dental implants was possible in 11 of 12 patients, yielding a success rate of 91.6%. Three patients (25%) had postoperative complications at the donor site and one patient (8.3%) at the recipient site. The mean value of bone width pre-operatively was 3.64 ± .48 mm which increased to 5.47 ± .57 mm post-operatively, the increase in mean value of bone width was statistically significant (p < 0.001). CONCLUSIONS: Harvesting of intraoral bone blocks from the zygomatic buttress region is an effective and safe method to treat localized alveolar ridge defect before implant placement.",
"title": ""
},
{
"docid": "4d8732ade98ce350907cfa01823817d0",
"text": "The past years have seen a growing amount of research on question answering (QA) over Semantic Web data, shaping an interaction paradigm that allows end users to profit from the expressive power of Semantic Web standards while, at the same time, hiding their complexity behind an intuitive and easy-to-use interface. On the other hand, the growing amount of data has led to a heterogeneous data landscape where QA systems struggle to keep up with the volume, variety and veracity of the underlying knowledge. The Question Answering over Linked Data (QALD) challenge aims at providing an up-to-date benchmark for assessing and comparing state-of-the-artsystems that mediate between a user, expressing his or her information need in natural language, and RDF data. It thus targets all researchers and practitioners working on querying Linked Data, natural language processing for question answering, multilingual information retrieval and related topics. The main goal is to gain insights into the strengths and shortcomings of different approaches and into possible solutions for coping with the large, heterogeneous and distributed nature of Semantic Web data. QALD has a 6-year history of developing a benchmark that is increasingly being used as standard evaluation tool for question answering over Linked Data. Overviews of the past instantiations of the challenge are available from the CLEF Working Notes as well as ESWC proceedings:",
"title": ""
},
{
"docid": "ae151d8ed9b8f99cfe22e593f381dd3b",
"text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.",
"title": ""
},
{
"docid": "000a7813bebebedf0308849ae3a8c237",
"text": "Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation for these algorithms to receive broad adoption.\n We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservations, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.",
"title": ""
},
{
"docid": "73545ef815fb22fa048fed3e0bc2cc8b",
"text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.",
"title": ""
},
{
"docid": "e55166985c781a0d8c503561211703c4",
"text": "It is one of the key issues in the 4-th generation (4G) cellular networks how to efficiently handle the heavy random access (RA) load caused by newly accommodating the huge population of Machine-to-Machine or Machine-Type Communication (M2M or MTC) customers/devices. We consider two major candidate methods for RA preamble allocation and management, which are under consideration for possible adoption in Long Term Evolution (LTE)-Advanced. One method, Method 1, is to completely split the set of available RA preambles into two disjoint subsets: one is for human-to-human (H2H) customers and the other for M2M customers/devices. The other method, Method 2, is also to split the set into two subsets: one is for H2H customers only whereas the other is for both H2H and M2M customers. We model and analyze the throughput performance of two methods. Our results demonstrate that there is a boundary of RA load below which Method 2 performs slightly better than Method 1 but above which Method 2 degrades throughput to a large extent. Our modeling and analysis can be utilized as a guideline to design the RA preamble resource management method.",
"title": ""
},
{
"docid": "d597215d72463a02fb373ae164f1c990",
"text": "Hearing impaired people uses signs to communicate with others. Just like verbally spoken languages, there is no universal language as every country has its own spoken language so every country has their own dialect of sign language and in India they uses Indian Sign Language (ISL). In the last few years, researchers take interest in the automation of ISL. Some attempts have been made in India and other countries. In this study we try to explore and analyze the work have been made with automation of sign language and gesture recognition. We tried to explore the challenges comes in the real time sign recognition system. This review also includes the progress of standard corpus creation of the ISL.",
"title": ""
},
{
"docid": "d077b9863d147739a57871820019bf46",
"text": "In this paper, an asymmetrical fuzzy-logic-control (FLC)-based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT methods are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value, therefore, it can successfully address the tracking speed/tracking accuracy dilemma compared with the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical OPEN ACCESS Energies 2015, 8 5339 FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.",
"title": ""
},
{
"docid": "0dfcb525fe5dd00032e7826a76a290e7",
"text": "In this study, we tried to find a solution for inpainting problem using deep convolutional autoencoders. A new training approach has been proposed as an alternative to the Generative Adversarial Networks. The neural network that designed for inpainting takes an image, which the certain part of its center is extracted, as an input then it attempts to fill the blank region. During the training phase, a distinct deep convolutional neural network is used and it is called Advisor Network. We show that the features extracted from intermediate layers of the Advisor Network, which is trained on a different dataset for classification, improves the performance of the autoencoder.",
"title": ""
},
{
"docid": "7a718827578d63ff9b7187be7e486051",
"text": "In this paper, we propose an adaptive specification-based intrusion detection system (IDS) for detecting malicious unmanned air vehicles (UAVs) in an airborne system in which continuity of operation is of the utmost importance. An IDS audits UAVs in a distributed system to determine if the UAVs are functioning normally or are operating under malicious attacks. We investigate the impact of reckless, random, and opportunistic attacker behaviors (modes which many historical cyber attacks have used) on the effectiveness of our behavior rule-based UAV IDS (BRUIDS) which bases its audit on behavior rules to quickly assess the survivability of the UAV facing malicious attacks. Through a comparative analysis with the multiagent system/ant-colony clustering model, we demonstrate a high detection accuracy of BRUIDS for compliant performance. By adjusting the detection strength, BRUIDS can effectively trade higher false positives for lower false negatives to cope with more sophisticated random and opportunistic attackers to support ultrasafe and secure UAV applications.",
"title": ""
},
{
"docid": "17de3b22497b1475a67ab4fb27231cca",
"text": "Dynamic Coattention Network (DCN) was introduced in late 2016 and achieved state-of-the-art performance on Stanford Question Answering Dataset (SQuAD). In this paper, we re-implement DCN and explore different extensions to DCN, including multi-task learning with Quora question pairs dataset, different loss function that account for distance from truth, variation of sentinel vectors, novel pre-processing trick, modification to coattention encoder architecture, as well as hyperparameter tuning. After joint training, we observe a 2% increase in f1 on Quora dataset. Our conclusion is that multi-task learning benefits the simpler task more than the more complicated task. On CodaLab leaderboard, we achieved Test f1 = 67.282, EM = 56.278.",
"title": ""
},
{
"docid": "4021d84dc14d0d9f365d26087540ce57",
"text": "The ability to verify the integrity of video files is important for consumer and business applications alike. Especially if video files are to be used as evidence in court, the ability to prove that a file existed in a certain state at a specific time and was not altered since is crucial. This paper proposes the use of blockchain technology to secure and verify the integrity of video files. To demonstrate a specific use case for this concept, we present an application that converts a videocamera enabled smartphone into a cost-effective tamperproof dashboard camera (dash cam). If the phone’s built-in sensors detect a collision, the application automatically creates a hash of the relevant video recording. This video file’s hash is immediately transmitted to the OriginStamp service, which includes the hash in a transaction made to the Bitcoin network. Once the Bitcoin network confirms the transaction, the video file’s hash is permanently secured in the tamperproof decentralized public ledger that is the blockchain. Any subsequent attempt to manipulate the video is futile, because the hash of the manipulated footage will not match the hash that was secured in the blockchain. Using this approach, the integrity of video evidence cannot be contested. The footage of dashboard cameras could become a valid form of evidence in court. In the future, the approach could be extended to automatically secure the integrity of digitally recorded data in other scenarios, including: surveillance systems, drone footage, body cameras of law enforcement, log data from industrial machines, measurements recorded by lab equipment, and the activities of weapon systems. We have made the source code of the demonstrated application available under an MIT License and encourage anyone to contribute: www.gipp.com/dtt",
"title": ""
},
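The hash-then-anchor workflow described in the preceding abstract can be sketched in a few lines. The snippet below is an illustrative outline only: the SHA-256 hashing uses the standard library, but the timestamping endpoint, payload format, and file name are hypothetical placeholders rather than OriginStamp's actual API.

```python
# Minimal sketch of the hash-then-anchor idea: compute a SHA-256 digest of a video
# file and send it to a trusted timestamping service. The endpoint URL and payload
# format below are hypothetical placeholders, not OriginStamp's actual API.
import hashlib
import requests

def sha256_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def anchor_hash(digest, endpoint="https://timestamp.example.org/api/stamp"):
    # Submit the digest; the service is assumed to commit it to a blockchain.
    resp = requests.post(endpoint, json={"hash": digest}, timeout=10)
    resp.raise_for_status()
    return resp.json()

digest = sha256_of_file("crash_recording.mp4")
print("SHA-256:", digest)
# receipt = anchor_hash(digest)  # uncomment once a real endpoint is configured
```

Any later manipulation of the recording changes its digest, so it no longer matches the anchored hash.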
{
"docid": "ca873d33aacb15d97c830a60dba6f7a3",
"text": "Internet of Things (IoT) is extension of current internet to provide communication, connection, and inter-networking between various devices or physical objects also known as “Things.” In this paper we have reported an effective use of IoT for Environmental Condition Monitoring and Controlling in Homes. We also provide fault detection and correction in any devices connected to this system automatically. Home Automation is nothing but automation of regular activities inside the home. Now a day's due to huge advancement in wireless sensor network and other computation technologies, it is possible to provide flexible and low cost home automation system. However there is no any system available in market which provide home automation as well as error detection in the devices efficiently. In this system we use prediction to find out the required solution if any problem occurs in any device connected to the system. To achieve that we are applying Data Mining concept. For efficient data mining we use Naive Bayes Classifier algorithm to find out the best possible solution. This gives a huge upper hand on other available home automation system, and we actually manage to provide a real intelligent system.",
"title": ""
},
{
"docid": "25af730c2a44b96e95058942b498dd32",
"text": "We introduce a manually-created, multireference dataset for abstractive sentence and short paragraph compression. First, we examine the impact of singleand multi-sentence level editing operations on human compression quality as found in this corpus. We observe that substitution and rephrasing operations are more meaning preserving than other operations, and that compressing in context improves quality. Second, we systematically explore the correlations between automatic evaluation metrics and human judgments of meaning preservation and grammaticality in the compression task, and analyze the impact of the linguistic units used and precision versus recall measures on the quality of the metrics. Multi-reference evaluation metrics are shown to offer significant advantage over single reference-based metrics.",
"title": ""
},
{
"docid": "adfe33d77ff2432904c78d45122659d5",
"text": "Two important plant pathogenic bacteria Acidovorax oryzae and Acidovorax citrulli are closely related and often not easy to be differentiated from each other, which often resulted in a false identification between them based on traditional methods such as carbon source utilization profile, fatty acid methyl esters, and ELISA detection tests. MALDI-TOF MS and Fourier transform infrared (FTIR) spectra have recently been successfully applied in bacterial identification and classification, which provide an alternate method for differentiating the two species. Characterization and comparison of the 10 A. oryzae strains and 10 A. citrulli strains were performed based on traditional bacteriological methods, MALDI-TOF MS, and FTIR spectroscopy. Our results showed that the identity of the two closely related plant pathogenic bacteria A. oryzae and A. citrulli was able to be confirmed by both pathogenicity tests and species-specific PCR, but the two species were difficult to be differentiated based on Biolog and FAME profile as well as 16 S rRNA sequence analysis. However, there were significant differences in MALDI-TOF MS and FTIR spectra between the two species of Acidovorax. MALDI-TOF MS revealed that 22 and 18 peaks were specific to A. oryzae and A. citrulli, respectively, while FTIR spectra of the two species of Acidovorax have the specific peaks at 1738, 1311, 1128, 1078, 989 cm-1 and at 1337, 968, 933, 916, 786 cm-1, respectively. This study indicated that MALDI-TOF MS and FTIR spectra may give a new strategy for rapid bacterial identification and differentiation of the two closely related species of Acidovorax.",
"title": ""
},
{
"docid": "2247a7972e853221e0e04c9761847c04",
"text": "Recently, as real-time Ethernet based protocols, especially EtherCAT have become more widely used in various fields such as automation systems and motion control, many studies on their design have been conducted. In this paper, we describe a method for the design of an EtherCAT slave module we developed and its application to a closed loop motor drive. Our EtherCAT slave module consists of the ARM Cortex-M3 as the host controller and ET1100 as the EtherCAT slave controller. These were interfaced with a parallel interface instead of the SPI used by many researchers and developers. To measure the performance of this device, 32-axis closed loop step motor drives were used and the experimental results in the test environment are described.",
"title": ""
}
] |
scidocsrr
|
9da8b3061320759d95fe2419f31e617a
|
A survey on named data networking
|
[
{
"docid": "a5abd5f11b83afdccbdfc190b8351b07",
"text": "Named Data Networking (NDN) is a recently proposed general- purpose network architecture that leverages the strengths of Internet architecture while aiming to address its weaknesses. NDN names packets rather than end-hosts, and most of NDN's characteristics are a consequence of this fact. In this paper, we focus on the packet forwarding model of NDN. Each packet has a unique name which is used to make forwarding decisions in the network. NDN forwarding differs substantially from that in IP; namely, NDN forwards based on variable-length names and has a read-write data plane. Designing and evaluating a scalable NDN forwarding node architecture is a major effort within the overall NDN research agenda. In this paper, we present the concepts, issues and principles of scalable NDN forwarding plane design. The essential function of NDN forwarding plane is fast name lookup. By studying the performance of the NDN reference implementation, known as CCNx, and simplifying its forwarding structure, we identify three key issues in the design of a scalable NDN forwarding plane: 1) exact string matching with fast updates, 2) longest prefix matching for variable-length and unbounded names and 3) large- scale flow maintenance. We also present five forwarding plane design principles for achieving 1 Gbps throughput in software implementation and 10 Gbps with hardware acceleration.",
"title": ""
}
] |
[
{
"docid": "7eb9e3aac9d25e3ae0628ffe0beea533",
"text": "Many believe that an essential component for the discovery of the tremendous diversity in natural organisms was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. One hypothesized mechanism for evolvability is developmental canalization, wherein certain dimensions of variation become more likely to be traversed and others are prevented from being explored (e.g., offspring tend to have similar-size legs, and mutations affect the length of both legs, not each leg individually). While ubiquitous in nature, canalization is rarely reported in computational simulations of evolution, which deprives us of in silico examples of canalization to study and raises the question of which conditions give rise to this form of evolvability. Answering this question would shed light on why such evolvability emerged naturally, and it could accelerate engineering efforts to harness evolution to solve important engineering challenges. In this article, we reveal a unique system in which canalization did emerge in computational evolution. We document that genomes entrench certain dimensions of variation that were frequently explored during their evolutionary history. The genetic representation of these organisms also evolved to be more modular and hierarchical than expected by chance, and we show that these organizational properties correlate with increased fitness. Interestingly, the type of computational evolutionary experiment that produced this evolvability was very different from traditional digital evolution in that there was no objective, suggesting that open-ended, divergent evolutionary processes may be necessary for the evolution of evolvability.",
"title": ""
},
{
"docid": "ea94a3c561476e88d5ac2640656a3f92",
"text": "Point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for global parameterization of an unorganized point cloud and its application to the meshing of the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. After initial estimation of principal directions, we develop a kNN(k-nearest neighbor) graph-based method to get a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed and its gradients align well with the guided direction field. A mixed integer solver is used to guarantee a seamless parameterization across the cut lines. The resultant parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, where the shape of the data is of any genus. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "05a5e98ad70d9206f2ef1444500050fe",
"text": "The integration of business processes across organizations is typically beneficial for all involved parties. However, the lack of trust is often a roadblock. Blockchain is an emerging technology for decentralized and transactional data sharing across a network of untrusted participants. It can be used to find agreement about the shared state of collaborating parties without trusting a central authority or any particular participant. Some blockchain networks also provide a computational infrastructure to run autonomous programs called smart contracts. In this paper, we address the fundamental problem of trust in collaborative process execution using blockchain. We develop a technique to integrate blockchain into the choreography of processes in such a way that no central authority is needed, but trust maintained. Our solution comprises the combination of an intricate set of components, which allow monitoring or coordination of business processes. We implemented our solution and demonstrate its feasibility by applying it to three use case processes. Our evaluation includes the creation of more than 500 smart contracts and the execution over 8,000 blockchain transactions.",
"title": ""
},
{
"docid": "d90467d05b4df62adc94b7c150013968",
"text": "Bacterial flagella and type III secretion system (T3SS) are evolutionarily related molecular transport machineries. Flagella mediate bacterial motility; the T3SS delivers virulence effectors to block host defenses. The inflammasome is a cytosolic multi-protein complex that activates caspase-1. Active caspase-1 triggers interleukin-1β (IL-1β)/IL-18 maturation and macrophage pyroptotic death to mount an inflammatory response. Central to the inflammasome is a pattern recognition receptor that activates caspase-1 either directly or through an adapter protein. Studies in the past 10 years have established a NAIP-NLRC4 inflammasome, in which NAIPs are cytosolic receptors for bacterial flagellin and T3SS rod/needle proteins, while NLRC4 acts as an adapter for caspase-1 activation. Given the wide presence of flagella and the T3SS in bacteria, the NAIP-NLRC4 inflammasome plays a critical role in anti-bacteria defenses. Here, we review the discovery of the NAIP-NLRC4 inflammasome and further discuss recent advances related to its biochemical mechanism and biological function as well as its connection to human autoinflammatory disease.",
"title": ""
},
{
"docid": "0de95645a74d401ad0d0d608faaa0d1d",
"text": "This contribution describes the research activity on the development of different smart pixel topologies aimed at three-dimensional (3D) vision applications exploiting the multiple-pulse indirect time-of-flight (TOF) and standard direct TOF techniques. The proposed approaches allow for the realization of scannerless laser ranging systems capable of fast collection of 3D data sets, as required in a growing number of applications like, automotive, security, surveillance and robotic guidance. Single channel approach, as well as matrix-organized sensors, will be described, facing the demanding constraints of specific applications, like the high dynamic range capability and the background immunity. Real time range (3D) and intensity (2D) imaging of non-cooperative targets, also in presence of strong background illumination, has been successfully performed in the 2m-9m range with a precision better than 5% and an accuracy of about 1%.",
"title": ""
},
{
"docid": "597d42e66f8bb9731cd6203b82213222",
"text": "Text classification is the process of classifying documents into predefined categories based on their content. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. We have proposed a Text Classification system for classifying abstract of different research papers. In this System we have extracted keywords using Porter Stemmer and Tokenizer. The word set is formed from the derived keywords using Association Rule and Apriori algorithm. The Probability of the word set is calculated using naive bayes classifier and then the new abstract inserted by the user is classified as belonging to one of the various classes. The accuracy of the system is found satisfactory. It requires less training data as compared to other classification system.",
"title": ""
},
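As a rough illustration of the keyword-stemming plus naive Bayes pipeline described in the preceding abstract, a minimal sketch might look like the following. The library choices (NLTK's Porter stemmer, scikit-learn's MultinomialNB) and the toy training data are assumptions for demonstration, and the association-rule/Apriori word-set step is omitted.

```python
# Minimal sketch of abstract classification with stemmed keywords and naive Bayes.
# Library choices (NLTK, scikit-learn) and the toy data are illustrative assumptions,
# not the exact pipeline used in the cited paper.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

stemmer = PorterStemmer()

def stem_tokens(text):
    # Tokenize on whitespace and reduce each word to its Porter stem.
    return [stemmer.stem(tok) for tok in text.lower().split()]

train_abstracts = [
    "support vector machines for image classification",
    "bayesian inference for gene expression data",
]
train_labels = ["computer_vision", "bioinformatics"]

vectorizer = CountVectorizer(tokenizer=stem_tokens)
X = vectorizer.fit_transform(train_abstracts)

clf = MultinomialNB()
clf.fit(X, train_labels)

# Classify a new abstract.
new_abstract = ["kernel methods for object recognition in images"]
print(clf.predict(vectorizer.transform(new_abstract)))
```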
{
"docid": "f06ec75f4835b6eabe50826f075e1fa1",
"text": "In this paper, we propose a robust methodology to assess the value of microblogging data to forecast stock market variables: returns, volatility and trading volume of diverse indices and portfolios. The methodology uses sentiment and attention indicators extracted from microblogs (a large Twitter dataset is adopted) and survey indices (AAII and II, USMC and Sentix), diverse forms to daily aggregate these indicators, usage of a Kalman Filter to merge microblog and survey sources, a realistic rolling windows evaluation, several Machine Learning methods and the Diebold-Mariano test to validate if the sentiment and attention based predictions are valuable when compared with an autoregressive baseline. We found that Twitter sentiment and posting volume were relevant for the forecasting of returns of S&P 500 index, portfolios of lower market capitalization and some industries. Additionally, KF sentiment was informative for the forecasting of returns. Moreover, Twitter and KF sentiment indicators were useful for the prediction of some survey sentiment indicators. These results confirm the usefulness of microblogging data for financial expert systems, allowing to predict stock market behavior and providing a valuable alternative for existing survey measures with advantages (e.g., fast and cheap creation, daily frequency). © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
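One way to picture the merging of microblog and survey indicators mentioned in the preceding abstract is a one-dimensional Kalman filter over daily sentiment values. The sketch below is a simplified illustration: the random-walk state model and the noise variances are arbitrary assumptions, not the paper's actual settings.

```python
# Minimal sketch of fusing two noisy daily sentiment series (e.g., Twitter-based and
# survey-based) with a 1-D Kalman filter. Noise variances here are illustrative guesses.
import numpy as np

def kalman_fuse(twitter, survey, q=0.01, r_twitter=0.25, r_survey=0.10):
    x, p = 0.0, 1.0          # state estimate and its variance
    fused = []
    for z in zip(twitter, survey):
        p += q                # predict step: random-walk state model
        for obs, r in zip(z, (r_twitter, r_survey)):
            k = p / (p + r)   # update with each observation in turn
            x += k * (obs - x)
            p *= (1 - k)
        fused.append(x)
    return np.array(fused)

twitter_sent = np.array([0.10, 0.30, -0.20, 0.05])
survey_sent = np.array([0.15, 0.25, -0.10, 0.00])
print(kalman_fuse(twitter_sent, survey_sent))
```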
{
"docid": "63f20dd528d54066ed0f189e4c435fe7",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "58b121012d9772285af95520fab7eaa0",
"text": "We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator�s perspective are put forward.",
"title": ""
},
{
"docid": "fd64292513423ee695a9cb0f0987a87b",
"text": "Most observer-based methods applied in fault detection and diagnosis (FDD) schemes use the classical twodegrees of freedom observer structure in which a constant matrix is used to stabilize the error dynamics while a post filter helps to achieve some desired properties for the residual signal. In this paper, we consider the use of a more general framework which is the dynamic observer structure in which an observer gain is seen as a filter designed so that the error dynamics has some desirable frequency domain characteristics. This structure offers extra degrees of freedom and we show how it can be used for the sensor faults diagnosis problem achieving detection and estimation at the same time. The use of weightings to transform this problem into a standard H∞ problem is also demonstrated.",
"title": ""
},
{
"docid": "ca1b189815ce5eb56c2b44e2c0c154aa",
"text": "Synthetic data sets can be useful in a variety of situations, including repeatable regression testing and providing realistic - but not real - data to third parties for testing new software. Researchers, engineers, and software developers can test against a safe data set without affecting or even accessing the original data, insulating them from privacy and security concerns as well as letting them generate larger data sets than would be available using only real data. Practitioners use data mining technology to discover patterns in real data sets that aren't apparent at the outset. This article explores how to combine information derived from data mining applications with the descriptive ability of synthetic data generation software. Our goal is to demonstrate that at least some data mining techniques (in particular, a decision tree) can discover patterns that we can then use to inverse map into synthetic data sets. These synthetic data sets can be of any size and will faithfully exhibit the same (decision tree) patterns. Our work builds on two technologies: synthetic data definition language and predictive model markup language.",
"title": ""
},
{
"docid": "ad8a727d0e3bd11cd972373451b90fe7",
"text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.",
"title": ""
},
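The Fast Geometric Ensembling idea from the preceding abstract, collecting weight snapshots along a cyclical learning-rate schedule and averaging their predictions, can be sketched roughly as follows. The PyTorch code is an illustrative approximation; the cycle length, learning-rate bounds, and the piecewise-linear schedule are assumptions rather than the authors' exact procedure, and the model and data loader are placeholders.

```python
# Minimal sketch of snapshot-style geometric ensembling: save weights at the low
# points of a cyclical learning rate, then average the ensemble's predictions.
# The model, data loader, and cycle settings here are placeholder assumptions.
import copy
import torch

def train_fge(model, loader, loss_fn, epochs=12, cycle=4, lr_max=0.05, lr_min=0.0005):
    snapshots = []
    opt = torch.optim.SGD(model.parameters(), lr=lr_max, momentum=0.9)
    for epoch in range(epochs):
        # Piecewise-linear cyclical learning rate within each cycle.
        t = (epoch % cycle) / cycle
        lr = (1 - t) * lr_max + t * lr_min
        for g in opt.param_groups:
            g["lr"] = lr
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        if (epoch + 1) % cycle == 0:
            snapshots.append(copy.deepcopy(model))  # snapshot at the end of each cycle
    return snapshots

def ensemble_predict(snapshots, x):
    # Average softmax outputs over all collected snapshots.
    probs = [torch.softmax(m(x), dim=1) for m in snapshots]
    return torch.stack(probs).mean(dim=0)
```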
{
"docid": "d88c13d1c332f943464733cfd5acef67",
"text": "Social media and social networks are embedded in our society to a point that could not have been imagined only ten years ago. Facebook, LinkedIn, and Twitter are already well known social networks that have a large audience in all age groups. The amount of data that those social sites gather from their users is continually increasing and this data is very valuable for marketing, research, and various other purposes. At the same time, this data usually contain a significant amount of sensitive information which should be protected against unauthorized disclosure. To protect the privacy of individuals, this data must be anonymized such that the risk of re-identification of specific individuals is very low. In this paper we study if anonymized social networks preserve existing communities from the original social networks. To perform this study, we introduce two approaches to measure the community preservation between the initial network and its anonymized version. In the first approach we simply count how many nodes from the original communities remained in the same community after the processes of anonymization and de-anonymization. In the second approach we consider the community preservation for each node individually. Specifically, for each node, we compare the original and final communities to which the node belongs. To anonymize social networks we use two models, namely, k-anonymity for social networks and k-degree anonymity. To determine communities in social networks we use an existing community detection algorithm based on modularity quality function. Our experiments on publically available datasets show that anonymized social networks satisfactorily preserve the community structure of their original networks. 56 Alina Campan, Yasmeen Alufaisan, Traian Marius Truta TRANSACTIONS ON DATA PRIVACY 8 (2015)",
"title": ""
},
{
"docid": "b7d1428434a7274b55a00bce2cc0cf4f",
"text": "This paper studies wideband hybrid precoder for downlink space-division multiple-access and orthogonal frequency-division multiple-access (SDMA-OFDMA) massive multi-input multi-output (MIMO) systems. We first derive an iterative algorithm to alternatingly optimize the phase-shifter based wideband analog precoder and low-dimensional digital precoders, then an efficient low-complexity non-iterative hybrid precoder proposes. Simulation results show that in wideband systems the performance of hybrid precoder is affected by the employed frequency-domain scheduling method and the number of available radio frequency (RF) chains, which can perform as well as narrowband hybrid precoder when greedy scheduling is employed and the number of RF chains is large.",
"title": ""
},
{
"docid": "e284ee49cdb78d3a9eec6daab37dd7e4",
"text": "This paper presents the design, simulation, and implementation of band pass filters in rectangular waveguides with radius, having 0.1 dB pass band ripple and 6.3% ripple at the center frequency of 14.2 GHz. A Mician microwave wizard software based on the Mode Matching Method (MMM) was used to simulate the structure of the filter. Simulation results are in good agreement with the measured one which improve the validity of the waveguide band pass filter design method.",
"title": ""
},
{
"docid": "7e047b7c0a0ded44106ce6b50726d092",
"text": "Skeleton-based action recognition task is entangled with complex spatio-temporal variations of skeleton joints, and remains challenging for Recurrent Neural Networks (RNNs). In this work, we propose a temporal-then-spatial recalibration scheme to alleviate such complex variations, resulting in an end-to-end Memory Attention Networks (MANs) which consist of a Temporal Attention Recalibration Module (TARM) and a Spatio-Temporal Convolution Module (STCM). Specifically, the TARM is deployed in a residual learning module that employs a novel attention learning network to recalibrate the temporal attention of frames in a skeleton sequence. The STCM treats the attention calibrated skeleton joint sequences as images and leverages the Convolution Neural Networks (CNNs) to further model the spatial and temporal information of skeleton data. These two modules (TARM and STCM) seamlessly form a single network architecture that can be trained in an end-to-end fashion. MANs significantly boost the performance of skeleton-based action recognition and achieve the best results on four challenging benchmark datasets: NTU RGB+D, HDM05, SYSU-3D and UT-Kinect.1",
"title": ""
},
{
"docid": "b638e384285bbb03bdc71f2eb2b27ff8",
"text": "In this paper, we present two win predictors for the popular online game Dota 2. The first predictor uses full post-match data and the second predictor uses only hero selection data. We will explore and build upon existing work on the topic as well as detail the specifics of both algorithms including data collection, exploratory analysis, feature selection, modeling, and results.",
"title": ""
},
{
"docid": "2b6087cab37980b1363b343eb0f81822",
"text": "We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.",
"title": ""
},
{
"docid": "82e7bdd78261e7339472c7278bff97ca",
"text": "A novel antenna with both horizontal and vertical polarizations is proposed for 1.7-2.1 GHz LTE band small cell base stations. Horizontal polarization is achieved by using the Vivaldi antennas at the main PCB board in azimuth plane, whereas the vertical polarization is obtained using the rectangular monopole with curved corners in proximity of the horizontal elements. A prototype antenna associated with 8-elements (four horizontal and four vertical) is fabricated on the FR4 substrate with the thickness of 0.2 cm and 0.12 cm for Vivaldi and monopole antennas, respectively. Experimental results have validated the design procedure of the antenna with a volume of 14 × 14 × 4.5 cm3 and indicated the realization of the requirements for the small cell base station applications.",
"title": ""
}
] |
scidocsrr
|
7931b953a11baceac3bea563c7fabc10
|
Body Parts Segmentation with Attached Props Using RGB-D Imaging
|
[
{
"docid": "0b8285c090fd6b725b3b04af9195c4fd",
"text": "We present a simple algorithm for computing a high-quality personalized avatar from a single color image and the corresponding depth map which have been captured by Microsoft’s Kinect sensor. Due to the low market price of our hardware setup, 3D face scanning becomes feasible for home use. The proposed algorithm combines the advantages of robust non-rigid registration and fitting of a morphable face model. We obtain a high-quality reconstruction of the facial geometry and texture along with one-to-one correspondences with our generic face model. This representation allows for a wide range of further applications such as facial animation or manipulation. Our algorithm has proven to be very robust. Since it does not require any user interaction, even non-expert users can easily create their own personalized avatars. Copyright # 2011 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "853760e8446ce8d3ebe5cfa3639d69b2",
"text": "In just one decade, social media have revolutionized the life of many people and thus attracted much attention, not only from industry, but also academia. To understand how researchers have adopted theories, used research constructs, and developed conceptual frameworks in their studies, a systematic and structured literature review based on five leading online academic databases was conducted. A total of 46 articles on social media research were consolidated and analyzed, including empirical studies spanning from 2002 to 2011. A collection of theories/models and constructs/attributes adopted in these articles is summarized and tabulated for easy reference and comprehension of extant research results. A causalchain framework was developed based on the input-moderator–mediator-output model to illustrate the causality between the research constructs used and the conceptualization of theoretical models/theories proposed by previous researchers. Because social media cover a wide range of research topics, the literature review may not be exhaustive. However, the proposed causal-chain framework and suggested research directions may be regarded as representative references for future research in the subject area. This is believed to be the first comprehensive literature review of social media research, and it contributes to a better understanding of the causes and effects of the adoption and usage of social media. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1e3e52f584863903625a07aabd1517d3",
"text": "Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset.",
"title": ""
},
{
"docid": "32d0a26f21a25fe1e783b1edcfbcf673",
"text": "Histologic grading has been used as a guide for clinical management in follicular lymphoma (FL). Proliferation index (PI) of FL generally correlates with tumor grade; however, in cases of discordance, it is not clear whether histologic grade or PI correlates with clinical aggressiveness. To objectively evaluate these cases, we determined PI by Ki-67 immunostaining in 142 cases of FL (48 grade 1, 71 grade 2, and 23 grade 3). A total of 24 cases FL with low histologic grade but high PI (LG-HPI) were identified, a frequency of 18%. On histologic examination, LG-HPI FL often exhibited blastoid features. Patients with LG-HPI FL had inferior disease-specific survival but a higher 5-year disease-free rate than low-grade FL with concordantly low PI (LG-LPI). However, transformation to diffuse large B-cell lymphoma was uncommon in LG-HPI cases (1 of 19; 5%) as compared with LG-LPI cases (27 of 74; 36%). In conclusion, LG-HPI FL appears to be a subgroup of FL with clinical behavior more akin to grade 3 FL. We propose that these LG-HPI FL cases should be classified separately from cases of low histologic grade FL with concordantly low PI.",
"title": ""
},
{
"docid": "8b773175bc7c1830958373dd45f56b6c",
"text": "Code-Mixing (CM) is a natural phenomenon observed in many multilingual societies and is becoming the preferred medium of expression and communication in online and social media fora. In spite of this, current Question Answering (QA) systems do not support CM and are only designed to work with a single interaction language. This assumption makes it inconvenient for multi-lingual users to interact naturally with the QA system especially in scenarios where they do not know the right word in the target language. In this paper, we present WebShodh an end-end web-based Factoid QA system for CM languages. We demonstrate our system with two CM language pairs: Hinglish (Matrix language: Hindi, Embedded language: English) and Tenglish (Matrix language: Telugu, Embedded language: English). Lack of language resources such as annotated corpora, POS taggers or parsers for CM languages poses a huge challenge for automated processing and analysis. In view of this resource scarcity, we only assume the existence of bi-lingual dictionaries from the matrix languages to English and use it for lexically translating the question into English. Later, we use this loosely translated question for our downstream analysis such as Answer Type(AType) prediction, answer retrieval and ranking. Evaluation of our system reveals that we achieve an MRR of 0.37 and 0.32 for Hinglish and Tenglish respectively. We hosted this system online and plan to leverage it for collecting more CM questions and answers data for further improvement.",
"title": ""
},
{
"docid": "13091eb3775715269b7bee838f0a6b00",
"text": "Smartphones can now connect to a variety of external sensors over wired and wireless channels. However, ensuring proper device interaction can be burdensome, especially when a single application needs to integrate with a number of sensors using different communication channels and data formats. This paper presents a framework to simplify the interface between a variety of external sensors and consumer Android devices. The framework simplifies both application and driver development with abstractions that separate responsibilities between the user application, sensor framework, and device driver. These abstractions facilitate a componentized framework that allows developers to focus on writing minimal pieces of sensor-specific code enabling an ecosystem of reusable sensor drivers. The paper explores three alternative architectures for application-level drivers to understand trade-offs in performance, device portability, simplicity, and deployment ease. We explore these tradeoffs in the context of four sensing applications designed to support our work in the developing world. They highlight a range of sensor usage models for our application-level driver framework that vary data types, configuration methods, communication channels, and sampling rates to demonstrate the framework's effectiveness.",
"title": ""
},
{
"docid": "afdc57b5d573e2c99c73deeef3c2fd5f",
"text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "131a866cba7a8b2e4f66f2496a80cb41",
"text": "The Python language is highly dynamic, most notably due to late binding. As a consequence, programs using Python typically run an order of magnitude slower than their C counterpart. It is also a high level language whose semantic can be made more static without much change from a user point of view in the case of mathematical applications. In that case, the language provides several vectorization opportunities that are studied in this paper, and evaluated in the context of Pythran, an ahead-of-time compiler that turns Python module into C++ meta-programs.",
"title": ""
},
{
"docid": "2e8d81ba0b09bc657964d20eb17c976c",
"text": "The “Internet of things” (IoT) concept nowadays is one of the hottest trends for research in any given field; since IoT is about interactions between multiple devices, things, and objects. This interaction opens different directions of enhancement and development in many fields, such as architecture, dependencies, communications, protocols, security, applications and big data. The results will be outstanding and we will be able to reach the desired change and improvements we seek in the fields that affect our lives. The critical goal of Internet of things (IoT) is to ensure effective communication between objects and build a sustained bond among them using different types of applications. The application layer is responsible for providing services and determines a set of protocols for message passing at the application level. This survey addresses a set of application layer protocols that are being used today for IoT, to affirm a reliable tie among objects and things.",
"title": ""
},
{
"docid": "905ba98c5d0a3ec39e06e9a14caa9016",
"text": "Dialogue topic tracking is a sequential labelling problem of recognizing the topic state at each time step in given dialogue sequences. This paper presents various artificial neural network models for dialogue topic tracking, including convolutional neural networks to account for semantics at each individual utterance, and recurrent neural networks to account for conversational contexts along multiple turns in the dialogue history. The experimental results demonstrate that our proposed models can significantly improve the tracking performances in human-human conversations.",
"title": ""
},
{
"docid": "7e682f98ee6323cd257fda07504cba20",
"text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods",
"title": ""
},
{
"docid": "cc148db20fdb503bde6ea6a2b05c7534",
"text": "Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance destination marketing strategies, at the same time transforming tourists’ experiences fundamentally. Effective and usable design is still in its infancy. In this paper we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists.",
"title": ""
},
{
"docid": "8b696bedd32e33cef6be49ef8ee40047",
"text": "While the scholarly literature on pornography use is growing, much of this literature has examined pornography use as a static feature that does not change. Despite this trend, pornography use, like most sexual behaviors, is likely best viewed as a dynamic feature that may shift across the developmental life span. Using a sample of 908 adults from the United States, retrospective data on pornography use through adolescence and emerging adulthood were gathered to explore trajectories of pornography use across these developmental periods. Latent mixture models suggested the presence of common patterns of use across both developmental periods. Adolescence patterns appeared to largely be distinguished by those who either engaged or did not engage with pornography, while emerging adulthood data revealed the presence of a group of experimenters who engaged in pornography through adolescence but then decreased use through their 20s. Men were found to be more likely to have consistent profiles of pornography use, while single adults were likely to have delayed entry into pornography use. Associations with adult mental health and pornography use were found, suggesting that early exposure to pornography was related to elevated current pornography use patterns and, to a lesser extent, dysfunctional pornography use. Trajectories also had a weak association with life satisfaction, with individuals reporting trajectories involving consistent pornography use reporting lower life satisfaction after controls.",
"title": ""
},
{
"docid": "5eb5dcf91534f88fc34badee5da2f24e",
"text": "This paper describes three driver options for integrated half-bridge power stage using depletion-mode GaN-on-SiC 0.15μm RF process: an active pull-up driver, a bootstrapped driver, and a modified active pull-up driver. The approaches are evaluated and compared in 5 W, 20 V synchronous Buck converter prototypes operating at 100 MHz switching frequency over a wide range of operating points. Measured efficiency peaks above 91% for the designs using the bootstrap and the modified active pull-up integrated drivers.",
"title": ""
},
{
"docid": "b19c8dab4c214b8afbc232b91ab35b25",
"text": "BACKGROUND\nMobile health (mHealth) apps for weight loss (weight loss apps) can be useful diet and exercise tools for individuals in need of losing weight. Most studies view weight loss app users as these types of individuals, but not all users have the same needs. In fact, users with disordered eating behaviors who desire to be underweight are also utilizing weight loss apps; however, few studies give a sense of the prevalence of these users in weight loss app communities and their perceptions of weight loss apps in relation to disordered eating behaviors.\n\n\nOBJECTIVE\nThe aim of this study was to provide an analysis of users' body mass indices (BMIs) in a weight loss app community and examples of how users with underweight BMI goals perceive the impact of the app on disordered eating behaviors.\n\n\nMETHODS\nWe focused on two aspects of a weight loss app (DropPounds): profile data and forum posts, and we moved from a broader picture of the community to a narrower focus on users' perceptions. We analyzed profile data to better understand the goal BMIs of all users, highlighting the prevalence of users with underweight BMI goals. Then we explored how users with a desire to be underweight discussed the weight loss app's impact on disordered eating behaviors.\n\n\nRESULTS\nWe found three main results: (1) no user (regardless of start BMI) starts with a weight gain goal, and most users want to lose weight; (2) 6.78% (1261/18,601) of the community want to be underweight, and most identify as female; (3) users with underweight BMI goals tend to view the app as positive, especially for reducing bingeing; however, some acknowledge its role in exacerbating disordered eating behaviors.\n\n\nCONCLUSIONS\nThese findings are important for our understanding of the different types of users who utilize weight loss apps, the perceptions of weight loss apps related to disordered eating, and how weight loss apps may impact users with a desire to be underweight. Whereas these users had underweight goals, they often view the app as helpful in reducing disordered eating behaviors, which led to additional questions. Therefore, future research is needed.",
"title": ""
},
{
"docid": "dc72319e7ce6d1ae14cf91598ad5a1a3",
"text": "The life cycle of dinoflagellates of the genus Alexandrium includes sexual reproduction followed by the formation of a dormant hypnozygote cyst, which serves as a resting stage. Negatively buoyant cysts purportedly fall to the benthos where they undergo a mandatory period of quiescence. Previous reports of cysts in the surficial sediments of the Gulf of Maine, where Alexandrium blooms are well documented, show a broad distribution of cysts, with highest concentrations generally in sediments below 100m depth. We report here an exploration of cysts suspended in the water column, where they would be better positioned to inoculate springtime Alexandrium populations. During cruises in February, April, and June of 2000, water samples were collected at depths just off the bottom (within 5m), at the top of the bottom nepheloid layer, and near the surface (1m) and examined for cyst concentrations. Suspended cysts were found throughout the Gulf of Maine and westernmost Bay of Fundy. Planktonic cyst densities were generally greater in near-bottom and top of the bottom nepheloid layer samples than in near-surface water samples; densities were of the order of 10 cystsm 3 in surface waters, and 10–10 cystsm 3 at near-bottom depths. Temporally, they were most abundant in February and least abundant in April. Reports by earlier workers of cysts in the underlying sediments were on the order of 10 cysts cm . We present calculations that demonstrate the likelihood of cyst resuspension from bottom sediments forced by swell and tidal currents, and propose that such resuspended cysts are important in inoculating the seasonal bloom. We estimate that suspended cysts may contribute significantly to the annual vegetative cell population in the Gulf of Maine. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "488e0161ee2a95c1c4082fc6981ae414",
"text": "Information networks that can be extracted from many domains are widely studied recently. Different functions for mining these networks are proposed and developed, such as ranking, community detection, and link prediction. Most existing network studies are on homogeneous networks, where nodes and links are assumed from one single type. In reality, however, heterogeneous information networks can better model the real-world systems, which are typically semi-structured and typed, following a network schema. In order to mine these heterogeneous information networks directly, we propose to explore the meta structure of the information network, i.e., the network schema. The concepts of meta-paths are proposed to systematically capture numerous semantic relationships across multiple types of objects, which are defined as a path over the graph of network schema. Meta-paths can provide guidance for search and mining of the network and help analyze and understand the semantic meaning of the objects and relations in the network. Under this framework, similarity search and other mining tasks such as relationship prediction and clustering can be addressed by systematic exploration of the network meta structure. Moreover, with user’s guidance or feedback, we can select the best meta-path or their weighted combination for a specific mining task.",
"title": ""
},
{
"docid": "3a50df4f64df3c65fbac1727ebe7725a",
"text": "Modern autonomous mobile robots require a strong understanding of their surroundings in order to safely operate in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimum space, and power consumption. These represent highly desirable features, especially for microaerial vehicles. In order to guarantee robust operation in real-world scenarios, the estimator is required to generalize well in diverse environments. Most of the existent depth estimators do not consider generalization, and only benchmark their performance on publicly available datasets after specific fine tuning. Generalization can be achieved by training on several heterogeneous datasets, but their collection and labeling is costly. In this letter, we propose a deep neural network for scene depth estimation that is trained on synthetic datasets, which allow inexpensive generation of ground truth data. We show how this approach is able to generalize well across different scenarios. In addition, we show how the addition of long short-term memory layers in the network helps to alleviate, in sequential image streams, some of the intrinsic limitations of monocular vision, such as global scale estimation, with low computational overhead. We demonstrate that the network is able to generalize well with respect to different real-world environments without any fine tuning, achieving comparable performance to state-of-the-art methods on the KITTI dataset.",
"title": ""
},
{
"docid": "7c6d2ede54f0445e852b8f9da95fca32",
"text": "In this paper we apply Conformal Prediction (CP) to the k -Nearest Neighbours Regression (k -NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. The regions produced by any Conformal Predictor are automatically valid, however their tightness and therefore usefulness depends on the nonconformity measure used by each CP. In effect a nonconformity measure evaluates how strange a given example is compared to a set of other examples based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k -Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.",
"title": ""
},
{
"docid": "8c710f24ed7f940c604388bd4109f8e2",
"text": "In one of the most frequent empirical scenarios in applied linguistics, a researcher's empirical results can be summarized in a two-dimensional table, in which − the rows list the levels of a nominal/categorical variable; − the columns list the levels of another nominal/categorical variable; − the cells in the table defined by these row and column levels provide the frequencies with which combinations of row and column levels were observed in some data set. An example of data from a study of disfluencies in speech is shown in Table 1, which shows the parts of speech of 335 words following three types of disfluencies. Both the part of speech and the disfluency markers represent categorical variables. Noun Verb Conjunction Totals uh 30 70 90 190 uhm 50 20 40 110 silence 20 5 10 35 Totals 100 95 140 335 Table 1 shows that 30 uh's were followed by a noun, 20 uhm's were followed by a verb, etc. One question a researcher may be interested in exploring is whether there is a correlation between the kind of disfluency produced – the variable in the rows – and the part of speech of the word following the disfluency – the variable in the columns. An exploratory glance at the data suggests that uh mostly precedes conjunctions while silences most precede nouns, but an actual statistical test is required to determine (i) whether the distribution of the parts of speech after the disfluencies is in fact significantly different from chance and (ii) what preferences and dispreferences this data set reflects. The most frequent statistical test to analyze two-dimensional frequency tables such as Table 1 is the chi-square test for independence [A] The chi-square test for independence",
"title": ""
}
] |
scidocsrr
|
7471ece8d427afee69bb9d5136a08998
|
Catastrophic Importance of Catastrophic Forgetting
|
[
{
"docid": "66af5a1cd491d6039a6b0b7b2d01b461",
"text": "For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised “practice” phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques.",
"title": ""
}
] |
[
{
"docid": "2445f9a80dc0f31ea39ade0ae8941f26",
"text": "Various groups of ascertainable individuals have been granted the status of “persons” under American law, while that status has been denied to other groups This article examines various analogies that might be drawn by courts in deciding whether to extend “person” status to intelligent machines, and the limitations that might be placed upon such recognition As an alternative analysis: this article questions the legal status of various human/machine interfaces, and notes the difficulty in establishing an absolute point beyond which legal recognition will not extend COMPUTERS INCREASINGLY RESEMBLE their human creators More precisely, it is becoming increasingly difficult to distinguish some computer information-processing from that of humans, judging from the final product. Computers have proven capable of far more physical and mental “human” functions than most people believed was possible. The increasing similarity between humans and machines might eventually require legal recognition of computers as “persons.” In the United States, there are two triers t’o such Views expressed here are those of the author @ Llarshal S. Willick 1982 41 rights reserved Editor’s Note: This article is written by an attorney using a common reference style for legal citations The system of citation is more complex than systems ordinarily used in scientific publications since it must provide numerous variations for different sources of evidence and jurisdictions We have decided not to change t.his article’s format for citations. legal recognition. The first tier determines which ascertainable individuals are considered persons (e g., blacks, yes; fetuses, no.) The second tier determines which rights and obligations are vested in the recognized persons, based on their observed or presumed capacities (e.g., the insane are restricted; eighteen-year-olds can vote.) The legal system is more evolutionary than revolutionary, however. Changes in which individuals should be recognized as persons under the law tend to be in response to changing cult,ural and economic realities, rather than the result of advance planning. Similarly, shifts in the allocation of legal rights and obligations are usually the result of societal pressures that do not result from a dispassionate masterplanning of society. Courts attempt to analogize new problems to those previously settled, where possible: the process is necessarily haphazard. As “intelligent” machines appear, t,hey will pervade a society in which computers play an increasingly significant part, but in which they will have no recognized legal personality. The question of what rights they should have will most probably not have been addressed. It is therefore most likely that computers will enter the legal arena through the courts The myriad acts of countless individuals will eventually give rise to a situat,ion in which some judicial decision regarding computer personality is needed in order to determine the rights of the parties to a THE AI MAGAZINE Summer 1983 5 AI Magazine Volume 4 Number 2 (1983) (© AAAI)",
"title": ""
},
{
"docid": "30e4a666e5b656c9256cf615b9469b32",
"text": "Recent advances in reconstruction and tracking technologies have allowed for easier live capture of humans as animated and realistic three-dimensional avatars. As such, traditional 2D video-based teleconference systems are evolving into immersive 3D-based ones, e.g., with teleported avatars situated in an interactive and shared augmented or virtual space. In such developments, one central concern, and a continued goal, is how to improve the teleconference experience and collaboration through the sense of co-presence and trust as felt by the participating users. In this study, we experimentally investigated the effects of the forms of the teleported 3D avatar (realistically reconstructed vs. character-like) and the environment background (realistic video vs. 3D VR) on the sense of co-presence and the level of trust. Our study shows that the participants generally exhibited a higher sense of co-presence when situated with a real environment background (realistic video) and greater confidence/trust when interacting with a reconstructed realistic looking avatar. These results can help the design of more effective collaborative teleconference and telepresence systems, according to their specific goals.",
"title": ""
},
{
"docid": "a62a23df11fd72522a3d9726b60d4497",
"text": "In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid is considered. Ideally, this topology will not inject any lower order harmonics into the grid due to high-frequency pulse width modulation operation. However, the nonideal factors in the system such as core saturation-induced distorted magnetizing current of the transformer and the dead time of the inverter, etc., contribute to a significant amount of lower order harmonics in the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for the lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which introduces even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme is also analyzed. The complete design has been validated with experimental results and good agreement with theoretical analysis of the overall system is observed.",
"title": ""
},
{
"docid": "227aa2478076daccec9291be190f7eed",
"text": "In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different sub-regions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.",
"title": ""
},
{
"docid": "4073da56cc874ea71f5e8f9c1c376cf8",
"text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.",
"title": ""
},
{
"docid": "0998097311e16ad38e2404435a778dcb",
"text": "Civilian Global Positioning System (GPS) receivers are vulnerable to a number of different attacks such as blocking, jamming, and spoofing. The goal of such attacks is either to prevent a position lock (blocking and jamming), or to feed the receiver false information so that it computes an erroneous time or location (spoofing). GPS receivers are generally aware of when blocking or jamming is occurring because they have a loss of signal. Spoofing, however, is a surreptitious attack. Currently, no countermeasures are in use for detecting spoofing attacks. We believe, however, that it is possible to implement simple, low-cost countermeasures that can be retrofitted onto existing GPS receivers. This would, at the very least, greatly complicate spoofing attacks. Introduction: The civilian Global Positioning System (GPS) is widely used by both government and private industry for many important applications. Some of these applications include public safety services such as police, fire, rescue and ambulance. The cargo industry, buses, taxis, railcars, delivery vehicles, agricultural harvesters, private automobiles, spacecraft, marine and airborne traffic also use GPS systems for navigation. In fact, the Federal Aviation Administration (FAA) is in the process of drafting an instruction requiring that all radio navigation systems aboard aircraft use GPS [1]. Additional uses include hiking and surveying, as well as being used in robotics, cell phones, animal tracking and even GPS wristwatches. Utility companies and telecommunication companies use GPS timing signals to regulate the base frequency of their distribution grids. GPS timing signals are also used by the financial industry, the broadcast industry, mobile telecommunication providers, the international financial industry, banking (for money transfers and time locks), and other distributed computer network applications [2,3]. In short, anyone who wants to know their exact location, velocity, or time might find GPS useful. Unfortunately, the civilian GPS signals are not secure [1]. Only the military GPS signals are encrypted (authenticated), but these are generally unavailable to civilians, foreign governments, and most of the U.S. government, including most of the Department of Defense (DoD). Plans are underway to upgrade the existing GPS system, but they apparently do not include adding encryption or authentication to the civilian GPS signal [4,5]. The GPS signal strength measured at the surface of the Earth is about –160dBw (1x10-16 Watts), which is roughly equivalent to viewing a 25-Watt light bulb from a distance of 10,000 miles. This weak signal can be easily blocked by destroying or shielding the GPS receiver’s antenna. The GPS signal can also be effectively jammed by a signal of a similar frequency, but greater strength. Blocking and jamming, however, are not the greatest security risk, because the GPS receiver will be fully aware it is not receiving the GPS signals needed to determine position and time. A more pernicious attack involves feeding the GPS receiver fake GPS signals so that it believes it is located somewhere in space and time that it is not. This “spoofing” attack is more elegant than jamming because it is surreptitious. The Vulnerability Assessment Team (VAT) at Los Alamos National Laboratory (LANL) has recently demonstrated the ease with which civilian GPS spoofing attacks can be implemented [6]. This spoofing is most easily accomplished by using a GPS satellite simulator. 
Such GPS satellite simulators are uncontrolled, and widely available. To conduct the spoofing attack, an adversary broadcasts a fake GPS signal with a higher signal strength than the true GPS signal. The GPS receiver believes that the fake signal is actually the true GPS signal from space, and ignores the true GPS signal. The receiver then proceeds to calculate erroneous position or time information based on this false signal. How Does GPS work? The GPS is operated by DoD. It consists of a constellation of 27 satellites (24 active and 3 standby) in 6 separate orbits and reached full official operational capability status on July 17, 1995 [7]. GPS users have the ability to obtain a 3-D position, velocity and time fix in all types of weather, 24-hours a day. GPS users can locate their position to within ± 18 ft on average or ± 60-90 ft for a worst case 3-D fix [8]. Each GPS satellite broadcasts two signals, a civilian unencrypted signal and a military encrypted signal. The civilian GPS signal was never intended for critical or security applications, though that is, unfortunately, how it is now often used. The DoD reserves the military encrypted GPS signal for sensitive applications such as smart weapons. This paper will be focusing on the civilian (unencrypted) GPS signal. Any discussion of civilian GPS vulnerabilities are fully unclassified [9]. The carrier wave for the civilian signal is the same frequency (1575.2 MHz) for all of the GPS satellites. The C/A code provides the GPS receiver on the Earth’s surface with a unique identification number (a.k.a. PRN or Pseudo Random Noise code). In this manner, each satellite transmits a unique identification number that allows the GPS receiver to know which satellites it is receiving signals from. The Nav/System data provides the GPS receiver with information about the position of all the satellites in the constellation as well as precise timing data from the atomic clocks aboard the satellites. L1 Carrier 1575.2 MHz",
"title": ""
},
{
"docid": "6e5e6b361d113fa68b2ca152fbf5b194",
"text": "Spectral learning algorithms have recently become popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct. Following this line of research, we propose four fast and scalable spectral algorithms for learning word embeddings – low dimensional real vectors (called Eigenwords) that capture the “meaning” of words from their context. All the proposed algorithms harness the multi-view nature of text data i.e. the left and right context of each word, are fast to train and have strong theoretical properties. Some of the variants also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform thorough qualitative and quantitative evaluation of Eigenwords showing that simple linear approaches give performance comparable to or superior than the state-of-the-art non-linear deep learning based methods.",
"title": ""
},
{
"docid": "71e3edd776e500b7f22db1efeec1492c",
"text": "Existing antenna and array systems for 60-GHz wireless personal area network communications suffer from inherent poor radiation at grazing angles. This limitation is overcome in this work with a highly integrated antenna module that combines both broadside and end-fire radiators in a single multilayer organic package. Liquid crystal polymer and Rogers RO3003 are used to implement a small form factor (12.5 mm × 10 mm × 1.3 mm) antenna architecture. The co-designed broadside and end-fire antennas are characterized and measured for operation in the 57-66-GHz frequency range. Measured boresight gains of 8.7 and 10.9 dBi are achieved for the broadside and end-fire antennas while maintaining 35-45-dB isolation between both antennas. The numerically estimated radiation efficiency is found to be 92.5% and 78.5% for the broadside and end-fire elements. These antennas are orthogonally polarized and suitable for frequency reuse. Integrated circuits are mounted inside recessed cavities to realize a fully active antenna module with beam switching or simultaneous radiation. To the best of our knowledge, this is the first publication of a single package multilayer integration of millimeter-wave active antennas with both azimuth and elevation coverage.",
"title": ""
},
{
"docid": "e702ce3922c5b0efff89d59782d1f4da",
"text": "BACKGROUND\nDeep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific \"handcrafted\" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial.\n\n\nAIMS\nThis paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce comparable, and in many cases, superior to results from the state-of-the-art hand-crafted feature-based classification approaches.\n\n\nRESULTS\nSpecifically, in this tutorial on DL for DP image analysis, we show how an open source framework (Caffe), with a singular network architecture, can be used to address: (a) nuclei segmentation (F-score of 0.83 across 12,000 nuclei), (b) epithelium segmentation (F-score of 0.84 across 1735 regions), (c) tubule segmentation (F-score of 0.83 from 795 tubules), (d) lymphocyte detection (F-score of 0.90 across 3064 lymphocytes), (e) mitosis detection (F-score of 0.53 across 550 mitotic events), (f) invasive ductal carcinoma detection (F-score of 0.7648 on 50 k testing patches), and (g) lymphoma classification (classification accuracy of 0.97 across 374 images).\n\n\nCONCLUSION\nThis paper represents the largest comprehensive study of DL approaches in DP to date, with over 1200 DP images used during evaluation. The supplemental online material that accompanies this paper consists of step-by-step instructions for the usage of the supplied source code, trained models, and input data.",
"title": ""
},
{
"docid": "95050a66393b41978cf136c1c99b1922",
"text": "In this paper, we explore a new way to provide context-aware assistance for indoor navigation using a wearable vision system. We investigate how to represent the cognitive knowledge of wayfinding based on first-person-view videos in real-time and how to provide context-aware navigation instructions in a human-like manner. Inspired by the human cognitive process of wayfinding, we propose a novel cognitive model that represents visual concepts as a hierarchical structure. It facilitates efficient and robust localization based on cognitive visual concepts. Next, we design a prototype system that provides intelligent context-aware assistance based on the cognitive indoor navigation knowledge model. We conducted field tests and evaluated the system's efficacy by benchmarking it against traditional 2D maps and human guidance. The results show that context-awareness built on cognitive visual perception enables the system to emulate the efficacy of a human guide, leading to positive user experience.",
"title": ""
},
{
"docid": "73128099f3ddd19e4f88d10cdafbd506",
"text": "BACKGROUND\nRecently, there has been an increased interest in the effects of essential oils on athletic performances and other physiological effects. This study aimed to assess the effects of Citrus sinensis flower and Mentha spicata leaves essential oils inhalation in two different groups of athlete male students on their exercise performance and lung function.\n\n\nMETHODS\nTwenty physical education students volunteered to participate in the study. The subjects were randomly assigned into two groups: Mentha spicata and Citrus sinensis (ten participants each). One group was nebulized by Citrus sinensis flower oil and the other by Mentha spicata leaves oil in a concentration of (0.02 ml/kg of body mass) which was mixed with 2 ml of normal saline for 5 min before a 1500 m running tests. Lung function tests were measured using a spirometer for each student pre and post nebulization giving the same running distance pre and post oils inhalation.\n\n\nRESULTS\nA lung function tests showed an improvement on the lung status for the students after inhaling of the oils. Interestingly, there was a significant increase in Forced Expiratory Volume in the first second and Forced Vital Capacity after inhalation for the both oils. Moreover significant reductions in the means of the running time were observed among these two groups. The normal spirometry results were 50 %, while after inhalation with M. spicata oil the ratio were 60 %.\n\n\nCONCLUSION\nOur findings support the effectiveness of M. spicata and C. sinensis essential oils on the exercise performance and respiratory function parameters. However, our conclusion and generalisability of our results should be interpreted with caution due to small sample size and lack of control groups, randomization or masking. We recommend further investigations to explain the mechanism of actions for these two essential oils on exercise performance and respiratory parameters.\n\n\nTRIAL REGISTRATION\nISRCTN10133422, Registered: May 3, 2016.",
"title": ""
},
{
"docid": "ef640dfcbed4b93413b03cd5c2ec3859",
"text": "MaxStream is a federated stream processing system that seamlessly integrates multiple autonomous and heterogeneous Stream Processing Engines (SPEs) and databases. In this paper, we propose to demonstrate the key features of MaxStream using two application scenarios, namely the Sales Map & Spikes business monitoring scenario and the Linear Road Benchmark, each with a different set of requirements. More specifically, we will show how the MaxStream Federator can translate and forward the application queries to two different commercial SPEs (Coral8 and StreamBase), as well as how it does so under various persistency requirements.",
"title": ""
},
{
"docid": "16d52c166a96c5d0d40479530cf52d2b",
"text": "The dorsolateral prefrontal cortex (DLPFC) plays a crucial role in working memory. Notably, persistent activity in the DLPFC is often observed during the retention interval of delayed response tasks. The code carried by the persistent activity remains unclear, however. We critically evaluate how well recent findings from functional magnetic resonance imaging studies are compatible with current models of the role of the DLFPC in working memory. These new findings suggest that the DLPFC aids in the maintenance of information by directing attention to internal representations of sensory stimuli and motor plans that are stored in more posterior regions.",
"title": ""
},
{
"docid": "88478e315049f2c155bb611d797e8eb1",
"text": "In this paper we analyze aspects of the intellectual property strategies of firms in the global cosmetics and toilet preparations industry. Using detailed data on all 4,205 EPO patent grants in the relevant IPC class between 1980 and 2001, we find that about 15 percent of all patents are challenged in EPO opposition proceedings, a rate about twice as high as in the overall population of EPO patents. Moreover, opposition in this sector is more frequent than in chemicals-based high technology industries such as biotechnology and pharmaceuticals. About one third of the opposition cases involve multiple opponents. We search for rationales that could explain this surprisingly strong “IP litigation” activity. In a first step, we use simple probability models to analyze the likelihood of opposition as a function of characteristics of the attacked patent. We then introduce owner firm variables and find that major differences across firms in the likelihood of having their patents opposed prevail even after accounting for other influences. Aggressive opposition in the past appears to be associated with a reduction of attacks on own patents. In future work we will look at the determinants of outcomes and duration of these oppositions, in an attempt to understand the firms’ strategies more fully. Acknowledgements This version of the paper was prepared for presentation at the Productivity Program meetingsof the NBER Summer Institute. An earlier version of the paper was presented in February 2002 at the University of Maastricht Workshop on Strategic Management, Innovation and Econometrics, held at Chateau St. Gerlach, Valkenburg. We would like to thank the participants and in particular Franz Palm and John Hagedoorn for their helpful comments.",
"title": ""
},
{
"docid": "32191afa8dea2376194dfde584bdbb57",
"text": "The billions of public photos on online social media sites contain a vast amount of latent visual information about the world. In this paper, we study the feasibility of observing the state of the natural world by recognizing specific types of scenes and objects in large-scale social image collections. More specifically, we study whether we can recreate satellite maps of snowfall by automatically recognizing snowy scenes in geo-tagged, time stamped images from Flickr. Snow recognition turns out to be a surprisingly doff cult and under-studied problem, so we test a variety of modern scene recognition techniques on this problem and introduce a large-scale, realistic dataset of images with ground truth annotations. As an additional proof-of-concept, we test the ability of recognition algorithms to detect a particular species of flower, the California Poppy, which could be used to give biologists a new source of data on its geospatial distribution over time.",
"title": ""
},
{
"docid": "a4e79170fd4914e993b5613918aa9d47",
"text": "In an effort to expand research on curiosity, we elaborate on a theoretical model that informs research on the design of a new measure and the nomological network of curiosity. Curiosity was conceptualized as a positive emotional-motivational system associated with the recognition, pursuit, and self-regulation of novelty and challenge. Using 5 independent samples, we developed the Curiosity and Exploration Inventory (CEI) comprising 2 dimensions: exploration (appetitive strivings for novelty and challenge) and absorption (full engagement in specific activities). The CEI has good psychometric properties, is relatively unaffected by socially desirable responding, is relatively independent from positive affect, and has a nomological network consistent with our theoretical framework. Predicated on our personal growth facilitation model, we discuss the potential role of curiosity in advancing understanding of various psychological phenomena.",
"title": ""
},
{
"docid": "fee574207e3985ea3c697f831069fa8b",
"text": "This paper focuses on the utilization of wireless networkin g in the robotics domain. Many researchers have already equipped their robot s with wireless communication capabilities, stimulated by the observation that multi-robot systems tend to have several advantages over their single-robot counterpa r s. Typically, this integration of wireless communication is tackled in a quite pragmat ic manner, only a few authors presented novel Robotic Ad Hoc Network (RANET) prot oc ls that were designed specifically with robotic use cases in mind. This is in harp contrast with the domain of vehicular ad hoc networks (VANET). This observati on is the starting point of this paper. If the results of previous efforts focusing on VANET protocols could be reused in the RANET domain, this could lead to rapid progre ss in the field of networked robots. To investigate this possibility, this paper rovides a thorough overview of the related work in the domain of robotic and vehicular ad h oc networks. Based on this information, an exhaustive list of requirements is d efined for both types. It is concluded that the most significant difference lies in the fact that VANET protocols are oriented towards low throughput messaging, while R ANET protocols have to support high throughput media streaming as well. Althoug h not always with equal importance, all other defined requirements are valid for bot h protocols. This leads to the conclusion that cross-fertilization between them is an appealing approach for future RANET research. To support such developments, this pap er concludes with the definition of an appropriate working plan.",
"title": ""
},
{
"docid": "f107ba1eef32a7d1c7b4c6f56470f05e",
"text": "Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce big amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publically available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place present clear advantages over existing solutions.",
"title": ""
},
{
"docid": "67de262056e303b8c180713aab9dca78",
"text": "A frame object detection problem consists of two problems: one is a regression problem to spatially separated bounding boxes, the second is the associated classification of the objects within realtime frame rate. It is widely used in the embedded systems, such as robotics, autonomous driving, security, and drones - all of which require high-performance and low-power consumption. This paper implements the YOLO (You only look once) object detector on an FPGA, which is faster and has a higher accuracy. It is based on the convolutional deep neural network (CNN), and it is a dominant part both the performance and the area. However, the object detector based on the CNN consists of a bounding box prediction (regression) and a class estimation (classification). Thus, the conventional all binarized CNN fails to recognize in most cases. In the paper, we propose a lightweight YOLOv2, which consists of the binarized CNN for a feature extraction and the parallel support vector regression (SVR) for both a classification and a localization. To our knowledge, this is the first time binarized CNN»s have been successfully used in object detection. We implement a pipelined based architecture for the lightweight YOLOv2 on the Xilinx Inc. zcu102 board, which has the Xilinx Inc. Zynq Ultrascale+ MPSoC. The implemented object detector archived 40.81 frames per second (FPS). Compared with the ARM Cortex-A57, it was 177.4 times faster, it dissipated 1.1 times more power, and its performance per power efficiency was 158.9 times better. Also, compared with the nVidia Pascall embedded GPU, it was 27.5 times faster, it dissipated 1.5 times lower power, and its performance per power efficiency was 42.9 times better. Thus, our method is suitable for the frame object detector for an embedded vision system.",
"title": ""
}
] |
scidocsrr
|
829516b1c9d1d49b8de6af3996e142a8
|
Compact Broadband Circularly Polarized Antenna With Parasitic Patches
|
[
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
},
{
"docid": "71034fd57c81f5787eb1642e24b44b82",
"text": "A novel dual-band microstrip antenna with omnidirectional circularly polarized (CP) and unidirectional CP characteristic for each band is proposed in this communication. Function of dual-band dual-mode is realized based on loading with metamaterial structure. Since the fields of the fundamental modes are most concentrated on the fringe of the radiating patch, modifying the geometry of the radiating patch has little effect on the radiation patterns of the two modes (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = 0, + 1$</tex></formula> mode). CP property for the omnidirectional zeroth-order resonance (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = 0$</tex> </formula> mode) is achieved by employing curved branches in the radiating patch. Then a 45<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}$</tex> </formula> inclined rectangular slot is etched in the center of the radiating patch to excite the CP property for the <formula formulatype=\"inline\"><tex Notation=\"TeX\">$n = + 1$</tex></formula> mode. A prototype is fabricated to verify the properties of the antenna. Both simulation and measurement results illustrate that this single-feed antenna is valuable in wireless communication for its low-profile, radiation pattern selectivity and CP characteristic.",
"title": ""
},
{
"docid": "66382b88e0faa573251d5039ccd65d6c",
"text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.",
"title": ""
},
{
"docid": "12ce2eef03ace3a51177a35473f935be",
"text": "In this letter, a novel slot-coupling feeding technique has been adopted to realize a circularly polarized (CP) 2 × 2 microstrip array. Each array element is fed through two microstrip lines that are excited 90° out of phase (dual-feed technique) and coupled to a square patch by means of a square-ring slot realized in the feeding network ground plane. Design procedure, simulation results, and measurement data are presented for a 2 × 2 array working in the WiMax 3.3-3.8 GHz frequency band (14% percentage bandwidth). Due to both the symmetry properties of the novel slot-coupling feeding configuration and the implementation of a sequential rotation technique, excellent axial ratio (AR) performance is achieved in the WiMax band (AR < 1.35 dB at broadside) and for any direction in the antenna main beam (AR < 2.25 dB at 3.55 GHz). Actually, the 3-dB AR bandwidth is larger than the WiMax frequency band, as it goes up to about 30%.",
"title": ""
}
] |
[
{
"docid": "7e78dbc7ae4fd9a2adbf7778db634b33",
"text": "Dynamic Proof of Storage (PoS) is a useful cryptographic primitive that enables a user to check the integrity of outsourced files and to efficiently update the files in a cloud server. Although researchers have proposed many dynamic PoS schemes in singleuser environments, the problem in multi-user environments has not been investigated sufficiently. A practical multi-user cloud storage system needs the secure client-side cross-user deduplication technique, which allows a user to skip the uploading process and obtain the ownership of the files immediately, when other owners of the same files have uploaded them to the cloud server. To the best of our knowledge, none of the existing dynamic PoSs can support this technique. In this paper, we introduce the concept of deduplicatable dynamic proof of storage and propose an efficient construction called DeyPoS, to achieve dynamic PoS and secure cross-user deduplication, simultaneously. Considering the challenges of structure diversity and private tag generation, we exploit a novel tool called Homomorphic Authenticated Tree (HAT). We prove the security of our construction, and the theoretical analysis and experimental results show that our construction is efficient in practice.",
"title": ""
},
{
"docid": "24e10d8e12d8b3c618f88f1f0d33985d",
"text": "W -algebras of finite type are certain finitely generated associative algebras closely related to universal enveloping algebras of semisimple Lie algebras. In this paper we prove a conjecture of Premet that gives an almost complete classification of finite dimensional irreducible modules for W -algebras. Also we get some partial results towards a conjecture by Ginzburg on their finite dimensional bimodules.",
"title": ""
},
{
"docid": "d42cba123245ef4e07351c4983b90225",
"text": "Deduplication technologies are increasingly being deployed to reduce cost and increase space-efficiency in corporate data centers. However, prior research has not applied deduplication techniques inline to the request path for latency sensitive, primary workloads. This is primarily due to the extra latency these techniques introduce. Inherently, deduplicating data on disk causes fragmentation that increases seeks for subsequent sequential reads of the same data, thus, increasing latency. In addition, deduplicating data requires extra disk IOs to access on-disk deduplication metadata. In this paper, we propose an inline deduplication solution, iDedup, for primary workloads, while minimizing extra IOs and seeks. Our algorithm is based on two key insights from realworld workloads: i) spatial locality exists in duplicated primary data; and ii) temporal locality exists in the access patterns of duplicated data. Using the first insight, we selectively deduplicate only sequences of disk blocks. This reduces fragmentation and amortizes the seeks caused by deduplication. The second insight allows us to replace the expensive, on-disk, deduplication metadata with a smaller, in-memory cache. These techniques enable us to tradeoff capacity savings for performance, as demonstrated in our evaluation with real-world workloads. Our evaluation shows that iDedup achieves 60-70% of the maximum deduplication with less than a 5% CPU overhead and a 2-4% latency impact.",
"title": ""
},
{
"docid": "9ce2aaa0ad3bfe383099782c46746819",
"text": "To achieve high production of rosmarinic acid and derivatives in Escherichia coli which are important phenolic acids found in plants, and display diverse biological activities. The synthesis of rosmarinic acid was achieved by feeding caffeic acid and constructing an artificial pathway for 3,4-dihydroxyphenyllactic acid. Genes encoding the following enzymes: rosmarinic acid synthase from Coleus blumei, 4-coumarate: CoA ligase from Arabidopsis thaliana, 4-hydroxyphenyllactate 3-hydroxylase from E. coli and d-lactate dehydrogenase from Lactobacillus pentosus, were overexpressed in an l-tyrosine over-producing E. coli strain. The yield of rosmarinic acid reached ~130 mg l−1 in the recombinant strain. In addition, a new intermediate, caffeoyl-phenyllactate (~55 mg l−1), was also produced by the engineered E. coli strain. This work not only leads to high yield production of rosmarinic acid and analogues, but also sheds new light on the construction of the pathway of rosmarinic acid in E. coli.",
"title": ""
},
{
"docid": "9ed8b7cb37ea1738c83b5d57e3e35d2d",
"text": "Agent-based technologies are rapidly growing as a powerful tool for modelling and developing largescale distributed systems. Recently, multi-agent systems are largely used for intelligent transportation systems modelling. Traffic signals control is a challenging issue in this area, especially in a large-scale urban network. In a large traffic network, where each agent represents a traffic signals controller, there are many entities interacting with each other and hence it is a complex system. An approach to reduce the complexity of such systems is using organisation-based multi-agent system. In this paper, we use an organisation called holonic multi-agent system (HMAS) to model a large traffic network. A traffic network containing fifty intersections is partitioned into a number of regions and holons are assigned to control each region. The holons are hierarchically arranged in two levels, intersection controller holons in the first level and region controller holons in the second level. We introduce holonic Q-learning to control the signals in both levels. The inter-level interactions between the holons in the two levels contribute to the learning process. Experimental results show that the holonic Q-learning prevents the network to be over-saturated while it causes less average delay time and higher flow rate. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d930e323cb7563edce3f7724be98b822",
"text": "Identity spoofing is a contender for high-security face recognition applications. With the advent of social media and globalized search, our face images and videos are wide-spread on the internet and can be potentially used to attack biometric systems without previous user consent. Yet, research to counter these threats is just on its infancy we lack public standard databases, protocols to measure spoofing vulnerability and baseline methods to detect these attacks. The contributions of this work to the area are three-fold: firstly we introduce a publicly available PHOTO-ATTACK database with associated protocols to measure the effectiveness of counter-measures. Based on the data available, we conduct a study on current state-of-the-art spoofing detection algorithms based on motion analysis, showing they fail under the light of these new dataset. By last, we propose a new technique of countermeasure solely based on foreground/background motion correlation using Optical Flow that outperforms all other algorithms achieving nearly perfect scoring with an equal-error rate of 1.52% on the available test data. The source code leading to the reported results is made available for the replicability of findings in this article.",
"title": ""
},
{
"docid": "8685e00d94d2362a5d6cfab51b61ed99",
"text": "In the late 1980s and early 1990s, object-oriented programming revolutionized software development, popularizing the approach of building of applications as collections of modular components. Today we are seeing a similar revolution in distributed system development, with the increasing popularity of microservice architectures built from containerized software components. Containers [15] [22] [1] [2] are particularly well-suited as the fundamental “object” in distributed systems by virtue of the walls they erect at the container boundary. As this architectural style matures, we are seeing the emergence of design patterns, much as we did for objectoriented programs, and for the same reason – thinking in terms of objects (or containers) abstracts away the lowlevel details of code, eventually revealing higher-level patterns that are common to a variety of applications and algorithms. This paper describes three types of design patterns that we have observed emerging in container-based distributed systems: single-container patterns for container management, single-node patterns of closely cooperating containers, and multi-node patterns for distributed algorithms. Like object-oriented patterns before them, these patterns for distributed computation encode best practices, simplify development, and make the systems where they are used more reliable.",
"title": ""
},
{
"docid": "c8ca57db545f2d1f70f3640651bb3e79",
"text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem'? The answer is, any way that works.\" (Richard P. Feyman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.",
"title": ""
},
{
"docid": "5350af2d42f9321338e63666dcd42343",
"text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.",
"title": ""
},
{
"docid": "9e10ca5f3776df0fe0ca41a8046adb27",
"text": "The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map that data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is important to reliably quantify their prediction accuracy. Cross-validation is the standard approach for evaluating the accuracy of such algorithms; however, several cross-validations methods exist and only some of them are statistically meaningful. Here we compared two popular cross-validation methods: record-wise and subject-wise. Using both a publicly available dataset and a simulation, we found that record-wise cross-validation often massively overestimates the prediction accuracy of the algorithms. We also found that this erroneous method is used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as erroneous results can mislead both clinicians and data scientists.",
"title": ""
},
{
"docid": "0ab60f1192919f636325b1341528ce78",
"text": "Efficient methods of processing unanticipated queries are a crucial prerequisite for the success of generalized database management systems. A wide variety of approaches to improve the performance of query evaluation algorithms have been proposed: logic-based and semantic transformations, fast implementations of basic operations, and combinatorial or heuristic algorithms for generating alternative access plans and choosing among them. These methods are presented in the framework of a general query evaluation procedure using the relational calculus representation of queries. In addition, nonstandard query optimization issues such as higher level query evaluation, query optimization in distributed databases, and use of database machines are addressed. The focus, however, is on query optimization in centralized database systems.",
"title": ""
},
{
"docid": "29844ddd38302be2180a98456c98a706",
"text": "Lichen planus is a chronic, inflammatory, autoimmune disease that affects the skin, oral mucosa, genital mucosa, scalp, and nails. Lichen planus lesions are described using the six P's (planar [flat-topped], purple, polygonal, pruritic, papules, plaques). Onset is usually acute, affecting the flexor surfaces of the wrists, forearms, and legs. The lesions are often covered by lacy, reticular, white lines known as Wickham striae. Classic cases of lichen planus may be diagnosed clinically, but a 4-mm punch biopsy is often helpful and is required for more atypical cases. High-potency topical corticosteroids are first-line therapy for all forms of lichen planus, including cutaneous, genital, and mucosal erosive lesions. In addition to clobetasol, topical tacrolimus appears to be an effective treatment for vulvovaginal lichen planus. Topical corticosteroids are also first-line therapy for mucosal erosive lichen planus. Systemic corticosteroids should be considered for severe, widespread lichen planus involving oral, cutaneous, or genital sites. Referral to a dermatologist for systemic therapy with acitretin (an expensive and toxic oral retinoid) or an oral immunosuppressant should be considered for patients with severe lichen planus that does not respond to topical treatment. Lichen planus may resolve spontaneously within one to two years, although recurrences are common. However, lichen planus on mucous membranes may be more persistent and resistant to treatment.",
"title": ""
},
{
"docid": "3d28f86795ddcd249657703cbedf87b1",
"text": "A 2.5V high precision BiCMOS bandgap reference with supply voltage range of 6V to 18V was proposed and realized. It could be applied to lots of Power Management ICs (Intergrated Circuits) due the high voltage. By introducing a preregulated current source, the PSRR (Power Supply Rejection Ratio) of 103dB at low frequency and the line regulation of 26.7μV/V was achieved under 15V supply voltage at ambient temperature of 27oC. Moreover, if the proper resistance trimming is implemented, the temperature coefficient could be reduced to less than 16.4ppm/oC. The start up time of the reference voltage could also be decreased with an additional bipolar and capacitor.",
"title": ""
},
{
"docid": "719ca13e95b9b4a1fc68772746e436d9",
"text": "The increased chance of deception in computer-mediated communication and the potential risk of taking action based on deceptive information calls for automatic detection of deception. To achieve the ultimate goal of automatic prediction of deception, we selected four common classification methods and empirically compared their performance in predicting deception. The deception and truth data were collected during two experimental studies. The results suggest that all of the four methods were promising for predicting deception with cues to deception. Among them, neural networks exhibited consistent performance and were robust across test settings. The comparisons also highlighted the importance of selecting important input variables and removing noise in an attempt to enhance the performance of classification methods. The selected cues offer both methodological and theoretical contributions to the body of deception and information systems research.",
"title": ""
},
{
"docid": "d7c236983c54213f17a0d8db886d5f2f",
"text": "Traffic light detection is an important system because it can alert driver on upcoming traffic light so that he/she can anticipate a head of time. In this paper we described our work on detecting traffic light color using machine learning approach. Using HSV color representation, our approach is to extract features based on an area of X×X pixels. Traffic light color model is then created by applying a learning algorithm on a set of examples of features representing pixels of traffic and non-traffic light colors. The learned model is then used to classify whether an area of pixels contains traffic light color or not. Evaluation of this approach reveals that it significantly improves the detection performance over the one based on value-range color segmentation technique.",
"title": ""
},
{
"docid": "8498a3240ae68bcd2b34e2b09cc1d7e2",
"text": "The impact of capping agents and environmental conditions (pH, ionic strength, and background electrolytes) on surface charge and aggregation potential of silver nanoparticles (AgNPs) suspensions were investigated. Capping agents are chemicals used in the synthesis of nanoparticles to prevent aggregation. The AgNPs examined in the study were as follows: (a) uncoated AgNPs (H(2)-AgNPs), (b) electrostatically stabilized (citrate and NaBH(4)-AgNPs), (c) sterically stabilized (polyvinylpyrrolidone (PVP)-AgNPs), and (d) electrosterically stabilized (branched polyethyleneimine (BPEI)-AgNPs)). The uncoated (H(2)-AgNPs), the citrate, and NaBH(4)-coated AgNPs aggregated at higher ionic strengths (100 mM NaNO(3)) and/or acidic pH (3.0). For these three nanomaterials, chloride (Cl(-), 10 mM), as a background electrolyte, resulted in a minimal change in the hydrodynamic diameter even at low pH (3.0). This was limited by the presence of residual silver ions, which resulted in the formation of stable negatively charged AgCl colloids. Furthermore, the presence of Ca(2+) (10 mM) resulted in aggregation of the three previously identified AgNPs regardless of the pH. As for PVP coated AgNPs, the ionic strength, pH and electrolyte type had no impact on the aggregation of the sterically stabilized AgNPs. The surface charge and aggregation of the BPEI coated AgNPs varied according to the solution pH.",
"title": ""
},
{
"docid": "6f989e22917aa2f99749701c8509fcca",
"text": "The reflection of an object can be distorted by undulations of the reflector, be it a funhouse mirror or a fluid surface. Painters and photographers have long exploited this effect, for example, in imaging scenery distorted by ripples on a lake. Here, we use this phenomenon to visualize micrometric surface waves generated as a millimetric droplet bounces on the surface of a vibrating fluid bath (Bush 2015b). This system, discovered a decade ago (Couder et al. 2005), is of current interest as a hydrodynamic quantum analog; specifically, the walking droplets exhibit several features reminiscent of quantum particles (Bush 2015a).",
"title": ""
},
{
"docid": "51ac4581fa82be87a28f7c080e026ae6",
"text": "III",
"title": ""
},
{
"docid": "2a61fe60671ec73cee769be4d8c59e0c",
"text": "With the rise of social media in our life, several decision makers have worked on these networks to make better decisions. In order to benefit from the data issued from these media, many researchers focused on helping companies understand how to perform a social media competitive analysis and transform these data into knowledge for decision makers. A high number of users interact at any time on different ways in social media such as by expressing their opinions about products, services or transaction related to the organization which can prove very helpful for making better projections. In this paper, we provide a literature review on data warehouse design approaches from social media. More precisely, we start by introducing the main concepts of data warehouse and social media. We also propose two classes of data warehouse design approaches from social media (behavior analysis and integration of sentiment analysis in data warehouse schema) and expose for each one the most representative existing works. Afterward, we propose a comparative study of the existing works.",
"title": ""
},
{
"docid": "58c25a0a600b7e59de5a85cb2b7faea9",
"text": "The increasing level of integration in electronic devices requires high density package substrates with good electrical and thermal performance, and high reliability. Organic laminate substrates have been serving these requirements with their continuous improvements in terms of the material characteristics and fabrication process to realize multi-layer fine pattern interconnects and small form factor. We present the advanced coreless laminate substrates in this paper including 3-layer thin substrate built by ETS (Embedded Trace Substrate) technology, 3-layer SUTC (Simmtech Ultra-Thin substrate with Carrier) for fan-out chip last package, and 3-layer coreless substrate with HSR (High modulus Solder Resist) for reduced warpage. We also present new coreless substrates up to 10 layers and substrate based on EMC. These new laminate substrates are used in many different applications such as application processors, memory, CMOS image sensors, touch screen controllers, MEMS, and RF SIP(System in Package) for over 70GHz applications. One common challenge for all these substrates is to minimize the warpage. The analysis and simulation techniques for the warpage control are presented.",
"title": ""
}
] |
scidocsrr
|
cd7b3ce8930358b018abd6cc3d75a957
|
DETECTING OUTLIERS BY USING TRIMMING CRITERION BASED ON ROBUST SCALE ESTIMATORS WITH SAS PROCEDURE 1
|
[
{
"docid": "4cb2d00d22f98da7e61800ac83d7ebdd",
"text": "Various statistical methods, developed after 1970, offer the opportunity to substantially improve upon the power and accuracy of the conventional t test and analysis of variance methods for a wide range of commonly occurring situations. The authors briefly review some of the more fundamental problems with conventional methods based on means; provide some indication of why recent advances, based on robust measures of location (or central tendency), have practical value; and describe why modern investigations dealing with nonnormality find practical problems when comparing means, in contrast to earlier studies. Some suggestions are made about how to proceed when using modern methods.",
"title": ""
}
] |
[
{
"docid": "8581de718d41373ee4250a300e675fb4",
"text": "It seems almost impossible to overstate the power of words; they literally have changed and will continue to change the course of world history. Perhaps the greatest tools we can give students for succeeding, not only in their education but more generally in life, is a large, rich vocabulary and the skills for using those words. Our ability to function in today’s complex social and economic worlds is mightily affected by our language skills and word knowledge. In addition to the vital importance of vocabulary for success in life, a large vocabulary is more specifically predictive and reflective of high levels of reading achievement. The Report of the National Reading Panel (2000), for example, concluded, “The importance of vocabulary knowledge has long been recognized in the development of reading skills. As early as 1924, researchers noted that growth in reading power relies on continuous growth in word knowledge” (pp. 4–15). Vocabulary or Vocabularies?",
"title": ""
},
{
"docid": "093cacb3e6f59529460093815ad2324b",
"text": "We examine the effectiveness in field settings of seven healthy eating nudges, classified according to whether they are 1) cognitively-oriented, such as “descriptive nutritional labeling,” “evaluative nutritional labeling,” or “visibility enhancements”; 2) affectively-oriented, such as “hedonic enhancements or “healthy eating calls”; or 3) behaviorally-oriented, such as “convenience enhancements” or “size enhancements.” Our multivariate three-level meta-analysis of 299 effect sizes, controlling for eating behavior, population, and study characteristics, yields a standardized mean difference (Cohen’s d) of .23 (equivalent to -124 kcal/day). Effect sizes increase as the focus of the nudges shifts from cognition (d=.12, -64 kcal) to affect (d=.24, -129 kcal) to behavior (d=.39, -209 kcal). Interventions are more effective at reducing unhealthy eating than increasing healthy eating or reducing total eating. Effect sizes are larger in the US than in other countries; in restaurants or cafeterias than in grocery stores; and in studies including a control group. Effect sizes are similar for food selection vs. consumption, for children vs. adults, and are independent of study duration. Compared to the typical nudge study (d=.12), one implementing the best nudge scenario can expect a six-fold increase in effectiveness (to d=.74), with half due to switching from cognitively-oriented to behaviorally-oriented nudges.",
"title": ""
},
{
"docid": "d6d30dbba9153bcc86ed8a4337821b78",
"text": "Multiplayer video streaming scenario can be seen everywhere today as the video traffic is becoming the “killer” traffic over the Internet. The Quality of Experience fairness is critical for not only the users but also the content providers and ISP. Consequently, a QoE fairness adaptive method of multiplayer video streaming is of great importance. Previous studies focus on client-side solutions without network global view or network-assisted solution with extra reaction to client. In this paper, a pure network-based architecture using SDN is designed for monitoring network global performance information. With the flexible programming and network mastery capacity of SDN, we propose an online Q-learning-based dynamic bandwidth allocation algorithm Q-FDBA with the goal of QoE fairness. The results show the Q-FDBA could adaptively react to high frequency of bottleneck bandwidth switches and achieve better QoE fairness within a certain time dimension.",
"title": ""
},
{
"docid": "b629ae23b7351c59c55ee9e9f1a33117",
"text": "75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 Tthe treatment of chronic hepatitis C virus (HCV) infection has been nothing short of remarkable with the prospect of elimination never more within reach. Attention has shifted to the safety and efficacy of DAAs in special populations, such as hepatitis B virus (HBV)/HCV coinfected individuals. Although the true prevalence of coinfection is unknown, studies from the United States report that 1.4% to 5.8% of HCV-infected individuals are hepatitis B surface antigen (HBsAg) positive compared with 1.4% to 4.1% in China. Coinfection is associated with higher rates of cirrhosis, decompensation, and hepatocellular carcinoma compared with monoinfected individuals. Because HBsAgpositive individuals were excluded from clinical trials of DAAs, HBV reactivation after HCV clearance was only reported after DAAs entered clinical use. Reports of severe and even fatal cases led the US Food and Drug Administration (FDA) to issue a strong directive regarding the risk of HBV reactivation with DAA treatment. The FDA boxed warning was based on 29 cases of HBV reactivation, including 2 fatal events and one that led to liver transplantation. However, owing to the nature of postapproval reporting, critical data were often missing, including baseline HBV serology, making it difficult to truly assess the risk. To err on the safe side, the FDA recommended screening all individuals scheduled to receive DAAs for evidence of current or past HBV infection with follow-up HBV DNA testing for any positive serology. Differing recommendations from international guidelines left clinicians unsure of how to proceed. The study by Liu et al in this issue of Gastroenterology provides much-needed data regarding the risk of HBV reactivation in coinfected individuals treated with DAAs. This prospective study enrolled 111 patients with HBV/ HCV coinfection who received sofosbuvir/ledipasvir for 12 weeks. Notably, although 61% were infected with HCV genotype 1, 39% had genotype 2 infection, a group for whom sofosbuvir/ledipasvir is not currently recommended. All patients achieved sustained virologic response (SVR). More important, the authors carefully evaluated what happened to HBV during and after HCV therapy. Patients were divided into 2 groups: those with undetectable HBV DNA and those with an HBV DNA of >20 IU/mL at baseline. Increases in HBV DNA levels were common in both groups. DNA increased to quantifiable levels in 31 of 37 initially",
"title": ""
},
{
"docid": "3084181a8f29e281ed3d68f8c9a67aee",
"text": "Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result.",
"title": ""
},
{
"docid": "380f29e386a69bee3c4187950f41cfaf",
"text": "Human action recognition is one of the most active research areas in both computer vision and machine learning communities. Several methods for human action recognition have been proposed in the literature and promising results have been achieved on the popular datasets. However, the comparison of existing methods is often limited given the different datasets, experimental settings, feature representations, and so on. In particularly, there are no human action dataset that allow concurrent analysis on three popular scenarios, namely single view, cross view, and cross domain. In this paper, we introduce a Multi-modal & Multi-view & Interactive (M2I) dataset, which is designed for the evaluation of the performances of human action recognition under multi-view scenario. This dataset consists of 1760 action samples, including 9 person-person interaction actions and 13 person-object interaction actions. Moreover, we respectively evaluate three representative methods for the single-view, cross-view, and cross domain human action recognition on this dataset with the proposed evaluation protocol. It is experimentally demonstrated that this dataset is extremely challenging due to large intraclass variation, multiple similar actions, significant view difference. This benchmark can provide solid basis for the evaluation of this task and will benefit advancing related computer vision and machine learning research topics.",
"title": ""
},
{
"docid": "8b908e2c7ed644371b37792a96207401",
"text": "Most websites, services, and applications have come to rely on Internet services (e.g., DNS, CDN, email, WWW, etc.) offered by third parties. Although employing such services generally improves reliability and cost-effectiveness, it also creates dependencies on service providers, which may expose websites to additional risks, such as DDoS attacks or cascading failures. As cloud services are becoming more popular, an increasing percentage of the overall Internet ecosystem relies on a decreasing number of highly popular services. In our general effort to assess the security risk for a given entity, and motivated by the effects of recent service disruptions, we perform a large-scale analysis of passive and active DNS datasets including more than 2.5 trillion queries in order to discover the dependencies between websites and Internet services.\n In this paper, we present the findings of our DNS dataset analysis, and attempt to expose important insights about the ecosystem of dependencies. To further understand the nature of dependencies, we perform graph-theoretic analysis on the dependency graph and propose support power, a novel power measure that can quantify the amount of dependence websites and other services have on a particular service. Our DNS analysis findings reveal that the current service ecosystem is dominated by a handful of popular service providers---with Amazon being the leader, by far---whose popularity is steadily increasing. These findings are further supported by our graph analysis results, which also reveals a set of less-popular services that many (regional) websites depend on.",
"title": ""
},
{
"docid": "53e8333b3e4e9874449492852d948ea2",
"text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.",
"title": ""
},
{
"docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d",
"text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.",
"title": ""
},
{
"docid": "fd317c492ed68bf14bdef38c27ed6696",
"text": "The systematic study of subcellular location patterns is required to fully characterize the human proteome, as subcellular location provides critical context necessary for understanding a protein's function. The analysis of tens of thousands of expressed proteins for the many cell types and cellular conditions under which they may be found creates a need for automated subcellular pattern analysis. We therefore describe the application of automated methods, previously developed and validated by our laboratory on fluorescence micrographs of cultured cell lines, to analyze subcellular patterns in tissue images from the Human Protein Atlas. The Atlas currently contains images of over 3000 protein patterns in various human tissues obtained using immunohistochemistry. We chose a 16 protein subset from the Atlas that reflects the major classes of subcellular location. We then separated DNA and protein staining in the images, extracted various features from each image, and trained a support vector machine classifier to recognize the protein patterns. Our results show that our system can distinguish the patterns with 83% accuracy in 45 different tissues, and when only the most confident classifications are considered, this rises to 97%. These results are encouraging given that the tissues contain many different cell types organized in different manners, and that the Atlas images are of moderate resolution. The approach described is an important starting point for automatically assigning subcellular locations on a proteome-wide basis for collections of tissue images such as the Atlas.",
"title": ""
},
{
"docid": "783c8ddc0245b3f8a263cfd6593b10df",
"text": "Implicit probabilistic models are a flexible class for modeling data. They define a process to simulate observations, and unlike traditional models, they do not require a tractable likelihood function. In this paper, we develop two families of models: hierarchical implicit models and deep implicit models. They combine the idea of implicit densities with hierarchical Bayesian modeling and deep neural networks. The use of implicit models with Bayesian analysis has been limited by our ability to perform accurate and scalable inference. We develop likelihood-free variational inference (LFVI). Key to LFVI is specifying a variational family that is also implicit. This matches the model’s flexibility and allows for accurate approximation of the posterior. Our work scales up implicit models to sizes previously not possible and advances their modeling design. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial network for discrete data; and a deep implicit model for text generation.",
"title": ""
},
{
"docid": "87c7875416503ab1f12de90a597959a4",
"text": "Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on variant texts in complex natural scenes.",
"title": ""
},
{
"docid": "fd42a330222290652741553f95d361f4",
"text": "Neuroanatomy places critical constraints on the functional connectivity of the cerebral cortex. To analyze these constraints we have examined the relationship between structural features of networks (expressed as graphs) and the patterns of functional connectivity to which they give rise when implemented as dynamical systems. We selected among structurally varying graphs using as selective criteria a number of global information-theoretical measures that characterize functional connectivity. We selected graphs separately for increases in measures of entropy (capturing statistical independence of graph elements), integration (capturing their statistical dependence) and complexity (capturing the interplay between their functional segregation and integration). We found that dynamics with high complexity were supported by graphs whose units were organized into densely linked groups that were sparsely and reciprocally interconnected. Connection matrices based on actual neuroanatomical data describing areas and pathways of the macaque visual cortex and the cat cortex showed structural characteristics that coincided best with those of such complex graphs, revealing the presence of distinct but interconnected anatomical groupings of areas. Moreover, when implemented as dynamical systems, these cortical connection matrices generated functional connectivity with high complexity, characterized by the presence of highly coherent functional clusters. We also found that selection of graphs as they responded to input or produced output led to increases in the complexity of their dynamics. We hypothesize that adaptation to rich sensory environments and motor demands requires complex dynamics and that these dynamics are supported by neuroanatomical motifs that are characteristic of the cerebral cortex.",
"title": ""
},
{
"docid": "f10eb96de9181085e249fdca1f4a568d",
"text": "This paper argues that tracking, object detection, and model building are all similar activities. We describe a fully automatic system that builds 2D articulated models known as pictorial structures from videos of animals. The learned model can be used to detect the animal in the original video - in this sense, the system can be viewed as a generalized tracker (one that is capable of modeling objects while tracking them). The learned model can be matched to a visual library; here, the system can be viewed as a video recognition algorithm. The learned model can also be used to detect the animal in novel images - in this case, the system can be seen as a method for learning models for object recognition. We find that we can significantly improve the pictorial structures by augmenting them with a discriminative texture model learned from a texture library. We develop a novel texture descriptor that outperforms the state-of-the-art for animal textures. We demonstrate the entire system on real video sequences of three different animals. We show that we can automatically track and identify the given animal. We use the learned models to recognize animals from two data sets; images taken by professional photographers from the Corel collection, and assorted images from the Web returned by Google. We demonstrate quite good performance on both data sets. Comparing our results with simple baselines, we show that, for the Google set, we can detect, localize, and recover part articulations from a collection demonstrably hard for object recognition",
"title": ""
},
{
"docid": "10634117fd51d94f9b12b9f0ed034f65",
"text": "Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner’s theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.",
"title": ""
},
{
"docid": "7c1ec6659380006b70a4004d44c5150f",
"text": "Cross-modal matching methods match data from different modalities according to their similarities. Most existing methods utilize label information to reduce the semantic gap between different modalities. However, it is usually time-consuming to manually label large-scale data. This paper proposes a Self-Paced Cross-Modal Subspace Matching (SCSM) method for unsupervised multimodal data. We assume that multimodal data are pair-wised and from several semantic groups, which form hard pair-wised constraints and soft semantic group constraints respectively. Then, we formulate the unsupervised cross-modal matching problem as a non-convex joint feature learning and data grouping problem. Self-paced learning, which learns samples from 'easy' to 'complex', is further introduced to refine the grouping result. Moreover, a multimodal graph is constructed to preserve the relationship of both inter- and intra-modality similarity. An alternating minimization method is employed to minimize the non-convex optimization problem, followed by the discussion on its convergence analysis and computational complexity. Experimental results on four multimodal databases show that SCSM outperforms state-of-the-art cross-modal subspace learning methods.",
"title": ""
},
{
"docid": "c210a68c57d7bfb15c7f646c3d890cd8",
"text": "Motion capture is frequently used for studies in biomechanics, and has proved particularly useful in understanding human motion. Unfortunately, motion capture approaches often fail when markers are occluded or missing and a mechanism by which the position of missing markers can be estimated is highly desirable. Of particular interest is the problem of estimating missing marker positions when no prior knowledge of marker placement is known. Existing approaches to marker completion in this scenario can be broadly divided into tracking approaches using dynamical modelling, and low rank matrix completion. This paper shows that these approaches can be combined to provide a marker completion algorithm that not only outperforms its respective components, but also solves the problem of incremental position error typically associated with tracking approaches.",
"title": ""
},
{
"docid": "74fd21dccc9e883349979c8292c5f450",
"text": "Stack Overflow (SO) has been a great source of natural language questions and their code solutions (i.e., question-code pairs), which are critical for many tasks including code retrieval and annotation. In most existing research, question-code pairs were collected heuristically and tend to have low quality. In this paper, we investigate a new problem of systematically mining question-code pairs from Stack Overflow (in contrast to heuristically collecting them). It is formulated as predicting whether or not a code snippet is a standalone solution to a question. We propose a novel Bi-View Hierarchical Neural Network which can capture both the programming content and the textual context of a code snippet (i.e., two views) to make a prediction. On two manually annotated datasets in Python and SQL domain, our framework substantially outperforms heuristic methods with at least 15% higher F1 and accuracy. Furthermore, we present StaQC (Stack Overflow Question-Code pairs), the largest dataset to date of ∼148K Python and ∼120K SQL question-code pairs, automatically mined from SO using our framework. Under various case studies, we demonstrate that StaQC can greatly help develop data-hungry models for associating natural language with programming language1.",
"title": ""
},
{
"docid": "4c877ad8e2f8393526514b12ff992ca0",
"text": "The squared-field-derivative method for calculating eddy-current (proximity-effect) losses in round-wire or litz-wire transformer and inductor windings is derived. The method is capable of analyzing losses due to two-dimensional and three-dimensional field effects in multiple windings with arbitrary waveforms in each winding. It uses a simple set of numerical magnetostatic field calculations, which require orders of magnitude less computation time than numerical eddy-current solutions, to derive a frequency-independent matrix describing the transformer or inductor. This is combined with a second, independently calculated matrix, based on derivatives of winding currents, to compute total ac loss. Experiments confirm the accuracy of the method.",
"title": ""
},
{
"docid": "4059c52f56810a463e07f7ed0e00e8ce",
"text": "Conservation and maintenance of historic buildings have exceptional requirements and need a detailed diagnosis and an accurate as-is documentation. This paper reports the use of Unmanned Aerial Vehicle (UAV) imagery to create an Intelligent Digital Built Heritage Model (IDBHM) based on Building Information Modeling (BIM) technology. Our work outlines a model-driven approach based on UAV data acquisition, photogrammetry, post-processing and segmentation of point clouds to promote partial automation of BIM modeling process. The methodology proposed was applied to a historical building facade located in Brazil. A qualitative and quantitative assessment of the proposed segmentation method was undertaken through the comparison between segmented clusters and as-designed documents, also as between point clouds and ground control points. An accurate and detailed parametric IDBHM was created from high-resolution Dense Surface Model (DSM). This Model can improve conservation and rehabilitation works. The results demonstrate that the proposed approach yields good results in terms of effectiveness in the clusters segmentation, compared to the as-designed",
"title": ""
}
] |
scidocsrr
|
051c5f835af764e782465b3db6c8a188
|
Determinants of accepting wireless mobile data services in China
|
[
{
"docid": "717bb81a5000035b1199eeb3b2308518",
"text": "Technology acceptance research has tended to focus on instrumental beliefs such as perceived usefulness and perceived ease of use as drivers of usage intentions, with technology characteristics as major external stimuli. Behavioral sciences and individual psychology, however, suggest that social influences and personal traits such as individual innovativeness are potentially important determinants of adoption as well, and may be a more important element in potential adopters’ decisions. This paper models and tests these relationships in non-work settings among several latent constructs such as intention to adopt wireless mobile technology, social influences, and personal innovativeness. Structural equation analysis reveals strong causal relationships between the social influences, personal innovativeness and the perceptual beliefs—usefulness and ease of use, which in turn impact adoption intentions. The paper concludes with some important implications for both theory research and implementation strategies. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "7317713e6725f6541e4197cb02525cd4",
"text": "This survey describes the current state-of-the-art in the development of automated visual surveillance systems so as to provide researchers in the field with a summary of progress achieved to date and to identify areas where further research is needed. The ability to recognise objects and humans, to describe their actions and interactions from information acquired by sensors is essential for automated visual surveillance. The increasing need for intelligent visual surveillance in commercial, law enforcement and military applications makes automated visual surveillance systems one of the main current application domains in computer vision. The emphasis of this review is on discussion of the creation of intelligent distributed automated surveillance systems. The survey concludes with a discussion of possible future directions.",
"title": ""
},
{
"docid": "b5f9535fb63cae3d115e1e5bded4795c",
"text": "This study uses a hostage negotiation setting to demonstrate how a team of strategic police officers can utilize specific coping strategies to minimize uncertainty at different stages of their decision-making in order to foster resilient decision-making to effectively manage a high-risk critical incident. The presented model extends the existing research on coping with uncertainty by (1) applying the RAWFS heuristic (Lipshitz and Strauss in Organ Behav Human Decis Process 69:149–163, 1997) of individual decision-making under uncertainty to a team critical incident decision-making domain; (2) testing the use of various coping strategies during “in situ” team decision-making by using a live simulated hostage negotiation exercise; and (3) including an additional coping strategy (“reflection-in-action”; Schön in The reflective practitioner: how professionals think in action. Temple Smith, London, 1983) that aids naturalistic team decision-making. The data for this study were derived from a videoed strategic command meeting held within a simulated live hostage training event; these video data were coded along three themes: (1) decision phase; (2) uncertainty management strategy; and (3) decision implemented or omitted. Results illustrate that, when assessing dynamic and high-risk situations, teams of police officers cope with uncertainty by relying on “reduction” strategies to seek additional information and iteratively update these assessments using “reflection-in-action” (Schön 1983) based on previous experience. They subsequently progress to a plan formulation phase and use “assumption-based reasoning” techniques in order to mentally simulate their intended courses of action (Klein et al. 2007), and identify a preferred formulated strategy through “weighing the pros and cons” of each option. In the unlikely event that uncertainty persists to the plan execution phase, it is managed by “reduction” in the form of relying on plans and standard operating procedures or by “forestalling” and intentionally deferring the decision while contingency planning for worst-case scenarios.",
"title": ""
},
{
"docid": "4ec7af75127df22c9cb7bd279cb2bcf3",
"text": "This paper describes a real-time walking control system developed for the biped robots JOHNNIE and LOLA. Walking trajectories are planned on-line using a simplified robot model and modified by a stabilizing controller. The controller uses hybrid position/force control in task space based on a resolved motion rate scheme. Inertial stabilization is achieved by modifying the contact force trajectories. The paper includes an analysis of the dynamics of controlled bipeds, which is the basis for the proposed control system. The system was tested both in forward dynamics simulations and in experiments with JOHNNIE.",
"title": ""
},
{
"docid": "c69c1ea60dd096005fa8a1d3b21d69ed",
"text": "The presentation of information is a very important part of the comprehension of the whole. Therefore, the chosen visualization technique should be compatible with the content to be presented. An easy and fast visualization of the subjects developed by a research group, during certain periods, requires a dynamic visualization technique such as the Animated Word Cloud. With this technique, we were able to use the titles of bibliographic publications of researchers to present, in a clear and straightforward manner, information that is not easily evident just by reading its title. The synchronization of the videos generated from the Animated Word Clouds allows a deeper analysis, a quick and intuitive observation, and the perception of information presented simultaneously.",
"title": ""
},
{
"docid": "49dc0f1c63cbccf1fac793b8514cb59e",
"text": "The emergence of MIMO antennas and channel bonding in 802.11n wireless networks has resulted in a huge leap in capacity compared with legacy 802.11 systems. This leap, however, adds complexity to selecting the right transmission rate. Not only does the appropriate data rate need to be selected, but also the MIMO transmission technique (e.g., Spatial Diversity or Spatial Multiplexing), the number of streams, and the channel width. Incorporating these features into a rate adaptation (RA) solution requires a new set of rules to accurately evaluate channel conditions and select the appropriate transmission setting with minimal overhead. To address these challenges, we propose ARAMIS (Agile Rate Adaptation for MIMO Systems), a standard-compliant, closed-loop RA solution that jointly adapts rate and bandwidth. ARAMIS adapts transmission rates on a per-packet basis; we believe it is the first 802.11n RA algorithm that simultaneously adapts rate and channel width. We have implemented ARAMIS on Atheros-based devices and deployed it on our 15-node testbed. Our experiments show that ARAMIS accurately adapts to a wide variety of channel conditions with negligible overhead. Furthermore, ARAMIS outperforms existing RA algorithms in 802.11n environments with up to a 10 fold increase in throughput.",
"title": ""
},
{
"docid": "71f388d3a2b50856c5529667df39602c",
"text": "Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.",
"title": ""
},
{
"docid": "12b855b39278c49d448fbda9aa56cacf",
"text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.",
"title": ""
},
{
"docid": "788501e065d2901e6a85287d62b4c941",
"text": "D-amino acid oxidase (DAO) is a flavoenzyme that metabolizes certain D-amino acids, notably the endogenous N-methyl D-aspartate receptor (NMDAR) co-agonist, D-serine. As such, it has the potential to modulate the function of NMDAR and to contribute to the widely hypothesized involvement of NMDAR signalling in schizophrenia. Three lines of evidence now provide support for this possibility: DAO shows genetic associations with the disorder in several, although not all, studies; the expression and activity of DAO are increased in schizophrenia; and DAO inactivation in rodents produces behavioural and biochemical effects, suggestive of potential therapeutic benefits. However, several key issues remain unclear. These include the regional, cellular and subcellular localization of DAO, the physiological importance of DAO and its substrates other than D-serine, as well as the causes and consequences of elevated DAO in schizophrenia. Herein, we critically review the neurobiology of DAO, its involvement in schizophrenia, and the therapeutic value of DAO inhibition. This review also highlights issues that have a broader relevance beyond DAO itself: how should we weigh up convergent and cumulatively impressive, but individually inconclusive, pieces of evidence regarding the role that a given gene may have in the aetiology, pathophysiology and pharmacotherapy of schizophrenia?",
"title": ""
},
{
"docid": "81d933a449c0529ab40f5661f3b1afa1",
"text": "Scene classification plays a key role in interpreting the remotely sensed high-resolution images. With the development of deep learning, supervised learning in classification of Remote Sensing with convolutional networks (CNNs) has been frequently adopted. However, researchers paid less attention to unsupervised learning in remote sensing with CNNs. In order to filling the gap, this paper proposes a set of CNNs called Multiple lAyeR feaTure mAtching(MARTA) generative adversarial networks (GANs) to learn representation using only unlabeled data. There will be two models of MARTA GANs involved: (1) a generative model G that captures the data distribution and provides more training data; (2) a discriminative model D that estimates the possibility that a sample came from the training data rather than G and in this way a well-formed representation of dataset can be learned. Therefore, MARTA GANs obtain the state-of-the-art results which outperform the results got from UC-Merced Land-use dataset and Brazilian Coffee Scenes dataset.",
"title": ""
},
{
"docid": "bd3f7e8e4416f67cb6e26ce0575af624",
"text": "Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. One route to simplification is to design the robot's body using `smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.",
"title": ""
},
{
"docid": "e50c07aa28cafffc43dd7eb29892f10f",
"text": "Recent approaches to the Automatic Postediting (APE) of Machine Translation (MT) have shown that best results are obtained by neural multi-source models that correct the raw MT output by also considering information from the corresponding source sentence. To this aim, we present for the first time a neural multi-source APE model based on the Transformer architecture. Moreover, we employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics used for the task. These are the main features of our submissions to the WMT 2018 APE shared task, where we participated both in the PBSMT subtask (i.e. the correction of MT outputs from a phrase-based system) and in the NMT subtask (i.e. the correction of neural outputs). In the first subtask, our system improves over the baseline up to -5.3 TER and +8.23 BLEU points ranking second out of 11 submitted runs. In the second one, characterized by the higher quality of the initial translations, we report lower but statistically significant gains (up to -0.38 TER and +0.8 BLEU), ranking first out of 10 submissions.",
"title": ""
},
{
"docid": "5f0d4437ea08a4f0946ca04db7359ebc",
"text": "The photosynthesis of previtamin D3 from 7-dehydrocholesterol in human skin was determined after exposure to narrow-band radiation or simulated solar radiation. The optimum wavelengths for the production of previtamin D3 were determined to be between 295 and 300 nanometers. When human skin was exposed to 295-nanometer radiation, up to 65 percent of the original 7-dehydrocholesterol content was converted to previtamin D3. In comparison, when adjacent skin was exposed to simulated solar radiation, the maximum formation of previtamin D3 was about 20 percent. Major differences in the formation of lumisterol3, and tachysterol3 from previtamin D3 were also observed. It is concluded that the spectral character of natural sunlight has a profound effect on the photochemistry of 7-dehydrocholesterol in human skin.",
"title": ""
},
{
"docid": "3fb8519ca0de4871b105df5c5d8e489f",
"text": "Intra-Body Communication (IBC), which modulates ionic currents over the human body as the communication medium, offers a low power and reliable signal transmission method for information exchange across the body. This paper first briefly reviews the quasi-static electromagnetic (EM) field modeling for a galvanic-type IBC human limb operating below 1 MHz and obtains the corresponding transfer function with correction factor using minimum mean square error (MMSE) technique. Then, the IBC channel characteristics are studied through the comparison between theoretical calculations via this transfer function and experimental measurements in both frequency domain and time domain. High pass characteristics are obtained in the channel gain analysis versus different transmission distances. In addition, harmonic distortions are analyzed in both baseband and passband transmissions for square input waves. The experimental results are consistent with the calculation results from the transfer function with correction factor. Furthermore, we also explore both theoretical and simulation results for the bit-error-rate (BER) performance of several common modulation schemes in the IBC system with a carrier frequency of 500 kHz. It is found that the theoretical results are in good agreement with the simulation results.",
"title": ""
},
{
"docid": "4f9b66eb63cd23cd6364992759269a2c",
"text": "In this paper, we present the concept of diffusing models to perform image-to-image matching. Having two images to match, the main idea is to consider the objects boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes. We illustrate this concept by an analogy with Maxwell's demons. We show that this concept relates to more traditional ones, based on attraction, with an intermediate step being optical flow techniques. We use the concept of diffusing models to derive three different non-rigid matching algorithms, one using all the intensity levels in the static image, one using only contour points, and a last one operating on already segmented images. Finally, we present results with synthesized deformations and real medical images, with applications to heart motion tracking and three-dimensional inter-patients matching.",
"title": ""
},
{
"docid": "b8def6380ef69091bec0d4e7b5442f57",
"text": "In a number of key IC fabrication steps in-process wafers are sensitive to moisture, oxygen and other airborne molecular contaminants in the air. Nitrogen purge of closed Front Opening Unified Pods (FOUP) have been implemented in many fabs to minimize wafer's exposure to the contaminants (or CDA purge if oxygen is not of concern). As the technology node advances, the need for minimizing the exposure has become even more stringent and in some processes requires FOUP purge while the FOUP door is off on an EFEM loadport. This requirement brings unique challenges to FOUP purge, especially at the front locations near FOUP opening, where EFEM air constantly tries to enter the FOUP. In this paper we present Entegris' latest experimental study on understanding the unique challenges of FOUP door-off purge and the excellent test results of newly designed advanced FOUP with purge flow distribution manifolds (diffusers).",
"title": ""
},
{
"docid": "3a4841b9aefdd0f96125132eaabdac49",
"text": "Unstructured text data produced on the internet grows rapidly, and sentiment analysis for short texts becomes a challenge because of the limit of the contextual information they usually contain. Learning good vector representations for sentences is a challenging task and an ongoing research area. Moreover, learning long-term dependencies with gradient descent is difficult in neural network language model because of the vanishing gradients problem. Natural Language Processing (NLP) systems traditionally treat words as discrete atomic symbols; the model can leverage small amounts of information regarding the relationship between the individual symbols. In this paper, we propose ConvLstm, neural network architecture that employs Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) on top of pre-trained word vectors. In our experiments, ConvLstm exploit LSTM as a substitute of pooling layer in CNN to reduce the loss of detailed local information and capture long term dependencies in sequence of sentences. We validate the proposed model on two sentiment datasets IMDB, and Stanford Sentiment Treebank (SSTb). Empirical results show that ConvLstm achieved comparable performances with less parameters on sentiment analysis tasks.",
"title": ""
},
{
"docid": "109b1ec344802099e833a5988832945b",
"text": "In this paper, we consider the problem of learning representations for authors from bibliographic co-authorship networks. Existing methods for deep learning on graphs, such as DeepWalk, suffer from link sparsity problem as they focus on modeling the link information only. We hypothesize that capturing both the content and link information in a unified way will help mitigate the sparsity problem. To this end, we present a novel model ‘Author2Vec’ , which learns lowdimensional author representations such that authors who write similar content and share similar network structure are closer in vector space. Such embeddings are useful in a variety of applications such as link prediction, node classification, recommendation and visualization. The author embeddings we learn are empirically shown to outperform DeepWalk by 2.35% and 0.83% for link prediction and clustering task respectively.",
"title": ""
},
{
"docid": "fe446f500549cedce487b78a133cbc45",
"text": "Drug addiction manifests as a compulsive drive to take a drug despite serious adverse consequences. This aberrant behaviour has traditionally been viewed as bad 'choices' that are made voluntarily by the addict. However, recent studies have shown that repeated drug use leads to long-lasting changes in the brain that undermine voluntary control. This, combined with new knowledge of how environmental, genetic and developmental factors contribute to addiction, should bring about changes in our approach to the prevention and treatment of addiction.",
"title": ""
},
{
"docid": "5f526d3ac8329fb801ece415f78eb343",
"text": "Usability evaluation is an increasingly important part of the user interface design process. However, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. This article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. The survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation.",
"title": ""
},
{
"docid": "2b4b822d722fac299ae7504078d87fd0",
"text": "LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us (letor@microsoft.com). Our goal is to make the dataset reliable and useful for the community.",
"title": ""
}
] |
scidocsrr
|
27bf1823e7774f0fa06e20ae221be99c
|
Named Entity Recognition in Bengali: A Conditional Random Field Approach
|
[
{
"docid": "ab25d07bd7f1daa44bb3dcb5401756a2",
"text": "Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character ngrams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; Borthwick et al., 1998). Conditional Random Fields (CRFs) (Lafferty et al., 2001) are undirected graphical models, a special case of which correspond to conditionally-trained finite state machines. While based on the same exponential form as maximum entropy models, they have efficient procedures for complete, non-greedy finite-state inference and training. CRFs have shown empirical successes recently in POS tagging (Lafferty et al., 2001), noun phrase segmentation (Sha and Pereira, 2003) and Chinese word segmentation (McCallum and Feng, 2003). Given these models’ great flexibility to include a wide array of features, an important question that remains is what features should be used? For example, in some cases capturing a word tri-gram is important, however, there is not sufficient memory or computation to include all word tri-grams. As the number of overlapping atomic features increases, the difficulty and importance of constructing only certain feature combinations grows. This paper presents a feature induction method for CRFs. Founded on the principle of constructing only those feature conjunctions that significantly increase loglikelihood, the approach builds on that of Della Pietra et al (1997), but is altered to work with conditional rather than joint probabilities, and with a mean-field approximation and other additional modifications that improve efficiency specifically for a sequence model. In comparison with traditional approaches, automated feature induction offers both improved accuracy and significant reduction in feature count; it enables the use of richer, higherorder Markov models, and offers more freedom to liberally guess about which atomic features may be relevant to a task.",
"title": ""
}
] |
[
{
"docid": "05a543846b5275f46be63e1b472b295e",
"text": "Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) were compared for monitoring live fuel moisture in a shrubland ecosystem. Both indices were calculated from 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data covering a 33 month period from 2000 to 2002. Both NDVI and NDWI were positively correlated with live fuel moisture measured by the Los Angeles County Fire Department (LACFD). NDVI had R values ranging between 0.25 to 0.60, while NDWI had significantly higher R values, varying between 0.39 and 0.80. Water absorption measures, such as NDWI, may prove more appropriate for monitoring live fuel moisture than measures of chlorophyll absorption such as NDV",
"title": ""
},
{
"docid": "582cae6ea4776c7e74923cfe70bab0ad",
"text": "An increasing number of people are using dating websites to search for their life partners. This leads to the curiosity of how attractive a specific person is to the opposite gender on an average level. We propose a novel algorithm to evaluate people's objective attractiveness based on their interactions with other users on the dating websites and implement machine learning algorithms to predict their objective attractiveness ratings from their profiles. We validate our method on a large dataset gained from a Japanese dating website and yield convincing results. Our prediction based on users' profiles, which includes image and text contents, is over 80% correlated with the real values of the calculated objective attractiveness for the female and over 50% correlated with the real values of the calculated objective attractiveness for the male.",
"title": ""
},
{
"docid": "7974885ccc886fb307dfdb98606951ed",
"text": "We examined whether the male spatial advantage varies across children from different socioeconomic (SES) groups. In a longitudinal study, children were administered two spatial tasks requiring mental transformations and a syntax comprehension task in the fall and spring of second and third grades. Boys from middle- and high-SES backgrounds outperformed their female counterparts on both spatial tasks, whereas boys and girls from a low-SES group did not differ in their performance level on these tasks. As expected, no sex differences were found on the verbal comprehension task. Prior studies have generally been based on the assumption that the male spatial advantage reflects ability differences in the population as a whole. Our finding that the advantage is sensitive to variations in SES provides a challenge to this assumption, and has implications for a successful explanation of the sex-related difference in spatial skill.",
"title": ""
},
{
"docid": "990d15bd9b79f6ec67eb04394d5791d7",
"text": "Spirituality is currently widely studied in the field of Psychology; and Filipinos are known for having a deep sense of spirituality. In terms of measuring spirituality however, researchers argue that measures or scales about it should reflect greater sensitivity to cultural characteristics and issues (Hill & Pargament, 2003). The study aimed to develop a measure of Filipino spirituality. Specifically, it intended to identify salient dimensions of spirituality among Filipinos. The study had two phases in the development of the scale, namely: focus group discussion (FGD) on the Filipino conceptions of spirituality as a basis for generating items; and test development, which included item construction based on the FGD and the literature, pilot testing, establishing reliability and validity of the scale. Qualitative results showed that spirituality has 3 main themes: connectedness with the sacred, sense of meaning and purpose, and expressions of spirituality. In the test development, the Filipino spirituality scale yielded two factors. The first factor—having a relationship or connectedness with a supreme being with a 53.13 % total variance; while the other factor of good relationship with others had a 7.196%. The reliability of the whole measure yielded cronbach alpha of 0.978, while the factors also obtained good reliability of indicators of 0.986 and 0.778 respectively. The results of the study are discussed in the broader conceptualization of spirituality in the Philippines as well as in mainstream Psychology.",
"title": ""
},
{
"docid": "7bb0ea76acaf4e23312ae62d0b6321db",
"text": "The European honey bee exploits floral resources efficiently and may therefore compete with solitary wild bees. Hence, conservationists and bee keepers are debating about the consequences of beekeeping for the conservation of wild bees in nature reserves. We observed flower-visiting bees on flowers of Calluna vulgaris in sites differing in the distance to the next honey-bee hive and in sites with hives present and absent in the Lüneburger Heath, Germany. Additionally, we counted wild bee ground nests in sites that differ in their distance to the next hive and wild bee stem nests and stem-nesting bee species in sites with hives present and absent. We did not observe fewer honey bees or higher wild bee flower visits in sites with different distances to the next hive (up to 1,229 m). However, wild bees visited fewer flowers and honey bee visits increased in sites containing honey-bee hives and in sites containing honey-bee hives we found fewer stem-nesting bee species. The reproductive success, measured as number of nests, was not affected by distance to honey-bee hives or their presence but by availability and characteristics of nesting resources. Our results suggest that beekeeping in the Lüneburg Heath can affect the conservation of stem-nesting bee species richness but not the overall reproduction either of stem-nesting or of ground-nesting bees. Future experiments need control sites with larger distances than 500 m to hives. Until more information is available, conservation efforts should forgo to enhance honey bee stocking rates but enhance the availability of nesting resources.",
"title": ""
},
{
"docid": "b8bcd83f033587533d7502c54a2b67da",
"text": "The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on the tracing of structural behavior and condition of the long-span bridges over their lifetime. Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9,283 km in the Hong Kong Link Road (HKLR) of the Hong Kong – Zhuhai – Macao Bridge of which the construction commenced in mid-2012. The SHM&MMS gives more emphasis on durability monitoring of the reinforced concrete viaduct bridges in marine environment and integration of the SHM system and bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. The CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when the actual maintenance is necessary and how cost-effective maintenance can be conducted. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges.",
"title": ""
},
{
"docid": "4952d426d0f2aed1daea234595dcd901",
"text": "Clustering analysis is a primary method for data mining. Density clustering has such advantages as: its clusters are easy to understand and it does not limit itself to shapes of clusters. But existing density-based algorithms have trouble in finding out all the meaningful clusters for datasets with varied densities. This paper introduces a new algorithm called VDBSCAN for the purpose of varied-density datasets analysis. The basic idea of VDBSCAN is that, before adopting traditional DBSCAN algorithm, some methods are used to select several values of parameter Eps for different densities according to a k-dist plot. With different values of Eps, it is possible to find out clusters with varied densities simultaneity. For each value of Eps, DBSCAN algorithm is adopted in order to make sure that all the clusters with respect to corresponding density are clustered. And for the next process, the points that have been clustered are ignored, which avoids marking both denser areas and sparser ones as one cluster. Finally, a synthetic database with 2-dimension data is used for demonstration, and experiments show that VDBSCAN is efficient in successfully clustering uneven datasets.",
"title": ""
},
{
"docid": "1af5c5e20c1ce827f899dc70d0495bdc",
"text": "High power sources and high sensitivity detectors are highly in demand for terahertz imaging and sensing systems. Use of nano-antennas and nano-plasmonic light concentrators in photoconductive terahertz sources and detectors has proven to offer significantly higher terahertz radiation powers and detection sensitivities by enhancing photoconductor quantum efficiency while maintaining its ultrafast operation. This is because of the unique capability of nano-antennas and nano-plasmonic structures in manipulating the concentration of photo-generated carriers within the device active area, allowing a larger number of photocarriers to efficiently contribute to terahertz radiation and detection. An overview of some of the recent advancements in terahertz optoelectronic devices through use of various types of nano-antennas and nano-plasmonic light concentrators is presented in this article.",
"title": ""
},
{
"docid": "7b83005861e8c0cfe7a13736e9a75ab6",
"text": "This thesis presents a study into the nature and structure of academic lectures, with a special focus on metadiscourse phenomena. Metadiscourse refers to a set of linguistics expressions that signal specific discourse functions such as the Introduction: “Today we will talk about...” and Emphasising: “This is an important point”. These functions are important because they are part of lecturers’ strategies in understanding of what happens in a lecture. The knowledge of their presence and identity could serve as initial steps toward downstream applications that will require functional analysis of lecture content such as a browser for lectures archives, summarisation, or an automatic minute-taker for lectures. One challenging aspect for metadiscourse detection and classification is that the set of expressions are semifixed, meaning that different phrases can indicate the same function. To that end a four-stage approach is developed to study metadiscourse in academic lectures. Firstly, a corpus of metadiscourse for academic lectures from Physics and Economics courses is built by adapting an existing scheme that describes functional-oriented metadiscourse categories. Second, because producing reference transcripts is a time-consuming task and prone to some errors due to the manual efforts required, an automatic speech recognition (ASR) system is built specifically to produce transcripts of lectures. Since the reference transcripts lack time-stamp information, an alignment system is applied to the reference to be able to evaluate the ASR system. Then, a model is developed using Support Vector Machines (SVMs) to classify metadiscourse tags using both textual and acoustical features. The results show that n-grams are the most inductive features for the task; however, due to data sparsity the model does not generalise for unseen n-grams. This limits its ability to solve the variation issue in metadiscourse expressions. Continuous Bag-of-Words (CBOW) provide a promising solution as this can capture both the syntactic and semantic similarities between words and thus is able to solve the generalisation issue. However, CBOW ignores the word order completely, something which is very important to be retained when classifying metadiscourse tags. The final stage aims to address the issue of sequence modelling by developing a joint CBOW and Convolutional Neural Network (CNN) model. CNNs can work with continuous features such as word embedding in an elegant and robust fashion by producing a fixedsize feature vector that is able to identify indicative local information for the tagging task. The results show that metadiscourse tagging using CNNs outperforms the SVMs model significantly even on ASR outputs, owing to its ability to predict a sequence of words that is more representative for the task regardless of its position in the sentence. In addition, the inclusion of other features such as part-of-speech (POS) tags and prosodic cues improved the results further. These findings are consistent in both disciplines. The final contribution in this thesis is to investigate the suitability of using metadiscourse tags as discourse features in the lecture structure segmentation model, despite the fact that the task is approached as a classification model and most of the state-of-art models are unsupervised. In general, the obtained results show remarkable improvements over the state-of-the-art models in both disciplines.",
"title": ""
},
{
"docid": "8f3c275ac076489747ad329edf1d8757",
"text": "Wilt is an important disease of banana causing significant reduction in yield. In presentstudy, the pathogenic fungus was isolated frompseudo stem of infected plants of banana.The in vitro efficacy of different plant extracts viz.,Azardiachta indica, Artemessia annua, Eucalyptus globulus, and Ocimum sanctum were tested to managepanama wilt of banana. Different concentrations 5, 10, 15 and 20% of plant extracts were used in the study. All the plant extracts showed significant reduction in the growth ofpathogen. Among the different extracts 20% of Azardiachta indica was found most effective followed by Eucalyptus globulus, Artemessia annua and Ocimum sanctum.",
"title": ""
},
{
"docid": "eecc4c73eb7f784b7f03923f14d50224",
"text": "Gated-Attention (GA) Reader has been effective for reading comprehension. GA Reader makes two assumptions: (1) a uni-directional attention that uses an input query to gate token encodings of a document; (2) encoding at the cloze position of an input query is considered for answer prediction. In this paper, we propose Collaborative Gating (CG) and Self-Belief Aggregation (SBA) to address the above assumptions respectively. In CG, we first use an input document to gate token encodings of an input query so that the influence of irrelevant query tokens may be reduced. Then the filtered query is used to gate token encodings of an document in a collaborative fashion. In SBA, we conjecture that query tokens other than the cloze token may be informative for answer prediction. We apply self-attention to link the cloze token with other tokens in a query so that the importance of query tokens with respect to the cloze position are weighted. Then their evidences are weighted, propagated and aggregated for better reading comprehension. Experiments show that our approaches advance the state-of-theart results in CNN, Daily Mail, and Who Did What public test sets.",
"title": ""
},
{
"docid": "2cbb2af6ed4ef193aad77c2f696a45c5",
"text": "Consider mutli-goal tasks that involve static environments and dynamic goals. Examples of such tasks, such as goaldirected navigation and pick-and-place in robotics, abound. Two types of Reinforcement Learning (RL) algorithms are used for such tasks: model-free or model-based. Each of these approaches has limitations. Model-free RL struggles to transfer learned information when the goal location changes, but achieves high asymptotic accuracy in single goal tasks. Model-based RL can transfer learned information to new goal locations by retaining the explicitly learned state-dynamics, but is limited by the fact that small errors in modelling these dynamics accumulate over long-term planning. In this work, we improve upon the limitations of model-free RL in multigoal domains. We do this by adapting the Floyd-Warshall algorithm for RL and call the adaptation Floyd-Warshall RL (FWRL). The proposed algorithm learns a goal-conditioned action-value function by constraining the value of the optimal path between any two states to be greater than or equal to the value of paths via intermediary states. Experimentally, we show that FWRL is more sample-efficient and learns higher reward strategies in multi-goal tasks as compared to Q-learning, model-based RL and other relevant baselines in a tabular domain.",
"title": ""
},
{
"docid": "8c218474ab97c4231c9eee9aa70bae39",
"text": "A widely studied non-deterministic polynomial time (NP) hard problem lies in nding a route between the two nodes of a graph. Oen meta-heuristics algorithms such asA∗ are employed on graphs with a large number of nodes. Here, we propose a deep recurrent neural network architecture based on the Sequence-2-Sequence (Seq2Seq) model, widely used, for instance in text translation. Particularly, we illustrate that utilising a context vector that has been learned from two dierent recurrent networks enables increased accuracies in learning the shortest route of a graph. Additionally, we show that one can boost the performance of the Seq2Seq network by smoothing the loss function using a homotopy continuation of the decoder’s loss function.",
"title": ""
},
{
"docid": "6724af38a637d61ccc2a4ad8119c6e1a",
"text": "INTRODUCTION Pivotal to athletic performance is the ability to more maintain desired athletic performance levels during particularly critical periods of competition [1], such as during pressurised situations that typically evoke elevated levels of anxiety (e.g., penalty kicks) or when exposed to unexpected adversities (e.g., unfavourable umpire calls on crucial points) [2, 3]. These kinds of situations become markedly important when athletes, who are separated by marginal physical and technical differences, are engaged in closely contested matches, games, or races [4]. It is within these competitive conditions, in particular, that athletes’ responses define their degree of success (or lack thereof); responses that are largely dependent on athletes’ psychological attributes [5]. One of these attributes appears to be mental toughness (MT), which has often been classified as a critical success factor due to the role it plays in fostering adaptive responses to positively and negatively construed pressures, situations, and events [6 8]. However, as scholars have intensified",
"title": ""
},
{
"docid": "b68d92cd03d77ee383b8be50a00716f1",
"text": "This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.",
"title": ""
},
{
"docid": "6035e8dfcfa38df199c7daeda33cfc0c",
"text": "Recommender system based on web data mining is widely used in e-commerce for it generates more accurate and objective recommendation results and provides personalized service for web users. This paper makes analysis on some major recommendation methods based on web data mining such as Collaborative Filtering and Association Rules mining, and discusses the practical application of these methods in the tourism e-commerce, and then presents a design of web mining based tourism e-commerce recommender system with offline and online modules.",
"title": ""
},
{
"docid": "515cbc485480e094320f23d142bd3b84",
"text": "Development of Emotional Intelligence Training for Certified Registered Nurse Anesthetists by Rickey King MSNA, Gooding Institute of Nurse Anesthesia, 2006 BSN, Jacksonville University, 2003 ASN, Oklahoma State University, 1988 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice Walden University February 2016 Abstract The operating room is a high stress, high stakes, emotionally charged area with an interdisciplinary team that must work cohesively for the benefit of all. If an operating room staff does not understand those emotions, such a deficit can lead to decreased effective communication and an ineffectual response to problems. Emotional intelligence is a conceptual framework encompassing the ability to identify, assess, perceive, and manage emotions. The research question for this project is aimed at understanding how an educational intervention could help to improve the emotional intelligence of anesthetists and their ability to communicate with other operation room staff to produce effective problem solving. The purpose of this scholarly project was to design a 5-week evidence-based, educational intervention that will be implemented for 16 nurse anesthetists practicing in 3 rural hospitals in Southern Kentucky. The Emotional and Social Competency Inventory – University Edition will be offered to the nurse anesthetists prior to the educational intervention and 6 weeks post implementation to determine impact on the 12 core concepts of emotional intelligence which are categorized under self-awareness, social awareness, self-management, and relationship management. It is hoped that this project will improve emotional intelligence, which directly impacts interdisciplinary communication and produces effective problem solving and improved patient outcomes. The positive social change lies in the ability of the interdisciplinary participants to address stressful events benefitting patients, operating room personnel, and the anesthetist by decreasing negative outcomes and horizontal violence in the operating room.The operating room is a high stress, high stakes, emotionally charged area with an interdisciplinary team that must work cohesively for the benefit of all. If an operating room staff does not understand those emotions, such a deficit can lead to decreased effective communication and an ineffectual response to problems. Emotional intelligence is a conceptual framework encompassing the ability to identify, assess, perceive, and manage emotions. The research question for this project is aimed at understanding how an educational intervention could help to improve the emotional intelligence of anesthetists and their ability to communicate with other operation room staff to produce effective problem solving. The purpose of this scholarly project was to design a 5-week evidence-based, educational intervention that will be implemented for 16 nurse anesthetists practicing in 3 rural hospitals in Southern Kentucky. The Emotional and Social Competency Inventory – University Edition will be offered to the nurse anesthetists prior to the educational intervention and 6 weeks post implementation to determine impact on the 12 core concepts of emotional intelligence which are categorized under self-awareness, social awareness, self-management, and relationship management. 
It is hoped that this project will improve emotional intelligence, which directly impacts interdisciplinary communication and produces effective problem solving and improved patient outcomes. The positive social change lies in the ability of the interdisciplinary participants to address stressful events benefitting patients, operating room personnel, and the anesthetist by decreasing negative outcomes and horizontal violence in the operating room. Development of Emotional Intelligence Training for Certified Registered Nurse Anesthetists by Rickey King MSNA, Gooding Institute of Nurse Anesthesia, 2006 BSN, Jacksonville University, 2003 ASN, Oklahoma State University, 1988 Project Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Nursing Practice",
"title": ""
},
{
"docid": "6ff33e3012188bb90a324182847141ee",
"text": "When starting the software process improvement (SPI) journey, there are many SPI standards to select from. Selecting good SPI standards can be a technical problem, from a software engineering point of view, but it can also be a political problem, some standards fitting more with internal political agendas than others. As it is well-known that SPI without management commitment can have disastrous effects on SPI, so can also be the consequence of selecting standards that are technically unfit. The dilemma on how to select SPI standards provides a picture of SPI as a political game played out between managers, software engineers and SPI people. Starting with SPI from the viewpoint of control theory, the paper identifies different conflict situations within the control theory framework, and suggests using game theory and drama theory for finding optimal control strategies. Drama theory is further explored through a SPI case study that illustrates how SPI standards stabilize in spite of conflicts and social disaster. The contribution of the paper consists of introducing the concept of ‘evolutionary drama theory’ (derived from evolutionary game theory, EGT) as a tool for describing and analysing how an artefact like a SPI standard evolves towards equilibrium (evolutionary stable strategy, ESS) by looking at repeated dramas where equilibriums may not necessarily be found or, if found, may not necessarily fit with the ESS.",
"title": ""
},
{
"docid": "5c2f115e0159d15a87904e52879c1abf",
"text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.",
"title": ""
},
{
"docid": "6cf711826e5718507725ff6f887c7dbc",
"text": "Electronic Support Measures (ESM) system is an important function of electronic warfare which provides the real time projection of radar activities. Such systems may encounter with very high density pulse sequences and it is the main task of an ESM system to deinterleave these mixed pulse trains with high accuracy and minimum computation time. These systems heavily depend on time of arrival analysis and need efficient clustering algorithms to assist deinterleaving process in modern evolving environments. On the other hand, self organizing neural networks stand very promising for this type of radar pulse clustering. In this study, performances of self organizing neural networks that meet such clustering criteria are evaluated in detail and the results are presented.",
"title": ""
}
] |
scidocsrr
|
7a37d5a06686520063f899ab51cbab9c
|
EMMA: A New Platform to Evaluate Hardware-based Mobile Malware Analyses
|
[
{
"docid": "2f2801e502492a648a0758b6e33fe19d",
"text": "Intel is developing the Intel® Software Guard Extensions (Intel® SGX) technology, an extension to Intel® Architecture for generating protected software containers. The container is referred to as an enclave. Inside the enclave, software’s code, data, and stack are protected by hardware enforced access control policies that prevent attacks against the enclave’s content. In an era where software and services are deployed over the Internet, it is critical to be able to securely provision enclaves remotely, over the wire or air, to know with confidence that the secrets are protected and to be able to save secrets in non-volatile memory for future use. This paper describes the technology components that allow provisioning of secrets to an enclave. These components include a method to generate a hardware based attestation of the software running inside an enclave and a means for enclave software to seal secrets and export them outside of the enclave (for example store them in non-volatile memory) such that only the same enclave software would be able un-seal them back to their original form.",
"title": ""
}
] |
[
{
"docid": "9f5b61ad41dceff67ab328791ed64630",
"text": "In this paper we present a resource-adaptive framework for real-time vision-aided inertial navigation. Specifically, we focus on the problem of visual-inertial odometry (VIO), in which the objective is to track the motion of a mobile platform in an unknown environment. Our primary interest is navigation using miniature devices with limited computational resources, similar for example to a mobile phone. Our proposed estimation framework consists of two main components: (i) a hybrid EKF estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-based SLAM, and (ii) an adaptive image-processing module that adjusts the number of detected image features based oadaptive image-processing module that adjusts the number of detected image features based on the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework isn the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework is capable of real-time processing of image and inertial data on the processor of a mobile phone.",
"title": ""
},
{
"docid": "de64aaa37e53beacb832d3686b293a9b",
"text": "By using a population-based cohort of the general Dutch population, the authors studied whether an excessively negative orientation toward pain (pain catastrophizing) and fear of movement/(re)injury (kinesiophobia) are important in the etiology of chronic low back pain and associated disability, as clinical studies have suggested. A total of 1,845 of the 2,338 inhabitants (without severe disease) aged 25-64 years who participated in a 1998 population-based questionnaire survey on musculoskeletal pain were sent a second questionnaire after 6 months; 1,571 (85 percent) participated. For subjects with low back pain at baseline, a high level of pain catastrophizing predicted low back pain at follow-up (odds ratio (OR) = 1.7, 95% confidence interval (CI): 1.0, 2.8) and chronic low back pain (OR = 1.7, 95% CI: 1.0, 2.3), in particular severe low back pain (OR = 3.0, 95% CI: 1.7, 5.2) and low back pain with disability (OR = 3.0, 95% CI: 1.7, 5.4). A high level of kinesiophobia showed similar associations. The significant associations remained after adjustment for pain duration, pain severity, or disability at baseline. For those without low back pain at baseline, a high level of pain catastrophizing or kinesiophobia predicted low back pain with disability during follow-up. These cognitive and emotional factors should be considered when prevention programs are developed for chronic low back pain and related disability.",
"title": ""
},
{
"docid": "7d285ca842be3d85d218dd70f851194a",
"text": "CONTEXT\nThe Atkins diet books have sold more than 45 million copies over 40 years, and in the obesity epidemic this diet and accompanying Atkins food products are popular. The diet claims to be effective at producing weight loss despite ad-libitum consumption of fatty meat, butter, and other high-fat dairy products, restricting only the intake of carbohydrates to under 30 g a day. Low-carbohydrate diets have been regarded as fad diets, but recent research questions this view.\n\n\nSTARTING POINT\nA systematic review of low-carbohydrate diets found that the weight loss achieved is associated with the duration of the diet and restriction of energy intake, but not with restriction of carbohydrates. Two groups have reported longer-term randomised studies that compared instruction in the low-carbohydrate diet with a low-fat calorie-reduced diet in obese patients (N Engl J Med 2003; 348: 2082-90; Ann Intern Med 2004; 140: 778-85). Both trials showed better weight loss on the low-carbohydrate diet after 6 months, but no difference after 12 months. WHERE NEXT?: The apparent paradox that ad-libitum intake of high-fat foods produces weight loss might be due to severe restriction of carbohydrate depleting glycogen stores, leading to excretion of bound water, the ketogenic nature of the diet being appetite suppressing, the high protein-content being highly satiating and reducing spontaneous food intake, or limited food choices leading to decreased energy intake. Long-term studies are needed to measure changes in nutritional status and body composition during the low-carbohydrate diet, and to assess fasting and postprandial cardiovascular risk factors and adverse effects. Without that information, low-carbohydrate diets cannot be recommended.",
"title": ""
},
{
"docid": "fa91331ef31de20ae63cc6c8ab33f062",
"text": "Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.",
"title": ""
},
{
"docid": "de70b208289bad1bc410bcb7a76e56df",
"text": "Instant Messaging chat sessions are realtime text-based conversations which can be analyzed using dialogue-act models. We describe a statistical approach for modelling and detecting dialogue acts in Instant Messaging dialogue. This involved the collection of a small set of task-based dialogues and annotating them with a revised tag set. We then dealt with segmentation and synchronisation issues which do not arise in spoken dialogue. The model we developed combines naive Bayes and dialogue-act n-grams to obtain better than 80% accuracy in our tagging experiment.",
"title": ""
},
{
"docid": "001b3155f0d67fd153173648cd483ac2",
"text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.",
"title": ""
},
{
"docid": "13503c2cb633e162f094727df62092d3",
"text": "In this article, we investigate word sense distributions in noun compounds (NCs). Our primary goal is to disambiguate the word sense of component words in NCs, based on investigation of “semantic collocation” between them. We use sense collocation and lexical substitution to build supervised and unsupervised word sense disambiguation (WSD) classifiers, and show our unsupervised learner to be superior to a benchmark WSD system. Further, we develop a word sense-based approach to interpreting the semantic relations in NCs.",
"title": ""
},
{
"docid": "a278abfa0501077eb2f71cbb272689d6",
"text": "Among the many emerging non-volatile memory technologies, chalcogenide (i.e. GeSbTe/GST) based phase change random access memory (PRAM) has shown particular promise. While accurate simulations are required for reducing programming current and enabling higher integration density, many challenges remain for improved simulation of PRAM cell operation including nanoscale thermal conduction and phase change. This work simulates the fully coupled electrical and thermal transport and phase change in 2D PRAM geometries, with specific attention to the impact of thermal boundary resistance between the GST and surrounding materials. For GST layer thicknesses between 25 and 75nm, the interface resistance reduces the predicted programming current and power by 31% and 53%, respectively, for a typical reset transition. The calculations also show the large sensitivity of programming voltage to the GST thermal conductivity. These results show the importance of temperature-dependent thermal properties of materials and interfaces in PRAM cells",
"title": ""
},
{
"docid": "2e65ae613aa80aac27d5f8f6e00f5d71",
"text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215",
"title": ""
},
{
"docid": "2fe1ed0f57e073372e4145121e87d7c6",
"text": "Information visualization (InfoVis), the study of transforming data, information, and knowledge into interactive visual representations, is very important to users because it provides mental models of information. The boom in big data analytics has triggered broad use of InfoVis in a variety of domains, ranging from finance to sports to politics. In this paper, we present a comprehensive survey and key insights into this fast-rising area. The research on InfoVis is organized into a taxonomy that contains four main categories, namely empirical methodologies, user interactions, visualization frameworks, and applications, which are each described in terms of their major goals, fundamental principles, recent trends, and state-of-the-art approaches. At the conclusion of this survey, we identify existing technical challenges and propose directions for future research.",
"title": ""
},
{
"docid": "20cc5c4aa870918f123e78490d5a5a73",
"text": "The interest and demand for female genital rejuvenation surgery are steadily increasing. This report presents a concept of genital beautification consisting of labia minora reduction, labia majora augmentation by autologous fat transplantation, labial brightening by laser, mons pubis reduction by liposuction, and vaginal tightening if desired. Genital beautification was performed for 124 patients between May 2009 and January 2012 and followed up for 1 year to obtain data about satisfaction with the surgery. Of the 124 female patients included in the study, 118 (95.2 %) were happy and 4 (3.2 %) were very happy with their postoperative appearance. In terms of postoperative functionality, 84 patients (67.7 %) were happy and 40 (32.3 %) were very happy. Only 2 patients (1.6 %) were not satisfied with the aesthetic result of their genital beautification procedures, and 10 patients (8.1 %) experienced wound dehiscence. The described technique of genital beautification combines different aesthetic female genital surgery techniques. Like other aesthetic surgeries, these procedures are designed for the subjective improvement of the appearance and feelings of the patients. The effects of the operation are functional and psychological. They offer the opportunity for sexual stimulation and satisfaction. The complication rate is low. Superior aesthetic results and patient satisfaction can be achieved by applying this technique. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "6be67fd8fb351d779c355762e188809b",
"text": "Analysis and examination of data is performed in digital forensics. Nowadays computer is the major source of communication which can also be used by the investigators to gain forensically relevant information. Forensic analysis can be done in static and live modes. Traditional approach provides incomplete evidentiary data, while live analysis tools can provide the investigators a more accurate and consistent picture of the current and previously running processes. Many important system related information present in volatile memory cannot be effectively recovered by using static analysis techniques. In this paper, we present a critical review of static and live analysis approaches and we evaluate the reliability of different tools and techniques used in static and live digital forensic analysis.",
"title": ""
},
{
"docid": "3477975d58a4b30a636108e1c11f5e61",
"text": "In this paper, an output feedback nonlinear control is proposed for a hydraulic system with mismatched modeling uncertainties in which an extended state observer (ESO) and a nonlinear robust controller are synthesized via the backstepping method. The ESO is designed to estimate not only the unmeasured system states but also the modeling uncertainties. The nonlinear robust controller is designed to stabilize the closed-loop system. The proposed controller accounts for not only the nonlinearities (e.g., nonlinear flow features of servovalve), but also the modeling uncertainties (e.g., parameter derivations and unmodeled dynamics). Furthermore, the controller theoretically guarantees a prescribed tracking transient performance and final tracking accuracy, while achieving asymptotic tracking performance in the absence of time-varying uncertainties, which is very important for high-accuracy tracking control of hydraulic servo systems. Extensive comparative experimental results are obtained to verify the high-performance nature of the proposed control strategy.",
"title": ""
},
{
"docid": "03dc2c32044a41715991d900bb7ec783",
"text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.",
"title": ""
},
{
"docid": "768ed187f94163727afd011817a306c6",
"text": "Although interest regarding the role of dispositional affect in job behaviors has surged in recent years, the true magnitude of affectivity's influence remains unknown. To address this issue, the authors conducted a qualitative and quantitative review of the relationships between positive and negative affectivity (PA and NA, respectively) and various performance dimensions. A series of meta-analyses based on 57 primary studies indicated that PA and NA predicted task performance in the hypothesized directions and that the relationships were strongest for subjectively rated versus objectively rated performance. In addition, PA was related to organizational citizenship behaviors but not withdrawal behaviors, and NA was related to organizational citizenship behaviors, withdrawal behaviors, counterproductive work behaviors, and occupational injury. Mediational analyses revealed that affect operated through different mechanisms in influencing the various performance dimensions. Regression analyses documented that PA and NA uniquely predicted task performance but that extraversion and neuroticism did not, when the four were considered simultaneously. Discussion focuses on the theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "b80df19e67d2bbaabf4da18d7b5af4e2",
"text": "This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step to learn about how to select and compose facial components from the databases; a runtime synthesis step to generate the cartoon face by assembling parts from a database of stylized facial components. We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components extracted from a set of cartoon faces to maintain a natural, consistent, and attractive look of the results. We demonstrate generality and robustness of our approach by applying it to a variety of portrait images and compare our output with stylized results created by artists via a comprehensive user study.",
"title": ""
},
{
"docid": "8966f87b2441cc2c348e25e3503e766c",
"text": "Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, found 3 new bugs in previously-fuzzed programs and libraries.",
"title": ""
},
{
"docid": "ab8af5f48be6b0b7769b8875e528be84",
"text": "A feedback vertex set of a graph is a subset of vertices that contains at least one vertex from every cycle in the graph. The problem considered is that of finding a minimum feedback vertex set given a weighted and undirected graph. We present a simple and efficient approximation algorithm with performance ratio of at most 2, improving previous best bounds for either weighted or unweighted cases of the problem. Any further improvement on this bound, matching the best constant factor known for the vertex cover problem, is deemed challenging. The approximation principle, underlying the algorithm, is based on a generalized form of the classical local ratio theorem, originally developed for approximation of the vertex cover problem, and a more flexible style of its application.",
"title": ""
},
{
"docid": "e2de8284e14cb3abbd6e3fbcfb5bc091",
"text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.",
"title": ""
},
{
"docid": "ac5e7e88d965aa695b8ae169edce2426",
"text": "Randomness test suites constitute an essential component within the process of assessing random number generators in view of determining their suitability for a specific application. Evaluating the randomness quality of random numbers sequences produced by a given generator is not an easy task considering that no finite set of statistical tests can assure perfect randomness, instead each test attempts to rule out sequences that show deviation from perfect randomness by means of certain statistical properties. This is the reason why several batteries of statistical tests are applied to increase the confidence in the selected generator. Therefore, in the present context of constantly increasing volumes of random data that need to be tested, special importance has to be given to the performance of the statistical test suites. Our work enrolls in this direction and this paper presents the results on improving the well known NIST Statistical Test Suite (STS) by introducing parallelism and a paradigm shift towards byte processing delivering a design that is more suitable for today's multicore architectures. Experimental results show a very significant speedup of up to 103 times compared to the original version.",
"title": ""
}
] |
scidocsrr
|
dccdf5cb70bfa68ed24161044a913941
|
Automatic Keyphrase Extraction via Topic Decomposition
|
[
{
"docid": "1714f89263c0c455d3c8ae1a358de9ee",
"text": "In this paper, we introduce and compare between two novel approaches, supervised and unsupervised, for identifying the keywords to be used in extractive summarization of text documents. Both our approaches are based on the graph-based syntactic representation of text and web documents, which enhances the traditional vector-space model by taking into account some structural document features. In the supervised approach, we train classification algorithms on a summarized collection of documents with the purpose of inducing a keyword identification model. In the unsupervised approach, we run the HITS algorithm on document graphs under the assumption that the top-ranked nodes should represent the document keywords. Our experiments on a collection of benchmark summaries show that given a set of summarized training documents, the supervised classification provides the highest keyword identification accuracy, while the highest F-measure is reached with a simple degree-based ranking. In addition, it is sufficient to perform only the first iteration of HITS rather than running it to its convergence.",
"title": ""
},
{
"docid": "1af7a41e5cac72ed9245b435c463b366",
"text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.",
"title": ""
}
] |
[
{
"docid": "5e435e0bd1ebdd1f86b57e40fc047366",
"text": "Deep clustering is a recently introduced deep learning architecture that uses discriminatively trained embeddings as the basis for clustering. It was recently applied to spectrogram segmentation, resulting in impressive results on speaker-independent multi-speaker separation. In this paper we extend the baseline system with an end-to-end signal approximation objective that greatly improves performance on a challenging speech separation. We first significantly improve upon the baseline system performance by incorporating better regularization, larger temporal context, and a deeper architecture, culminating in an overall improvement in signal to distortion ratio (SDR) of 10.3 dB compared to the baseline of 6.0 dB for two-speaker separation, as well as a 7.1 dB SDR improvement for three-speaker separation. We then extend the model to incorporate an enhancement layer to refine the signal estimates, and perform end-to-end training through both the clustering and enhancement stages to maximize signal fidelity. We evaluate the results using automatic speech recognition. The new signal approximation objective, combined with end-to-end training, produces unprecedented performance, reducing the word error rate (WER) from 89.1% down to 30.8%. This represents a major advancement towards solving the cocktail party problem.",
"title": ""
},
{
"docid": "6d096dc86d240370bef7cc4e4cdd12e5",
"text": "Modern software systems are subject to uncertainties, such as dynamics in the availability of resources or changes of system goals. Self-adaptation enables a system to reason about runtime models to adapt itself and realises its goals under uncertainties. Our focus is on providing guarantees for adaption goals. A prominent approach to provide such guarantees is automated verification of a stochastic model that encodes up-to-date knowledge of the system and relevant qualities. The verification results allow selecting an adaption option that satisfies the goals. There are two issues with this state of the art approach: i) changing goals at runtime (a challenging type of uncertainty) is difficult, and ii) exhaustive verification suffers from the state space explosion problem. In this paper, we propose a novel modular approach for decision making in self-adaptive systems that combines distinct models for each relevant quality with runtime simulation of the models. Distinct models support on the fly changes of goals. Simulation enables efficient decision making to select an adaptation option that satisfies the system goals. The tradeoff is that simulation results can only provide guarantees with a certain level of accuracy. We demonstrate the benefits and tradeoffs of the approach for a service-based telecare system.",
"title": ""
},
{
"docid": "5f528e90763ef96cd812f2b9c2c42de6",
"text": "Many blind motion deblur methods model the motion blur as a spatially invariant convolution process. However, motion blur caused by the camera movement in 3D space during shutter time often leads to spatially varying blurring effect over the image. In this paper, we proposed an efficient two-stage approach to remove spatially-varying motion blurring from a single photo. There are three main components in our approach: (i) a minimization method of estimating region-wise blur kernels by using both image information and correlations among neighboring kernels, (ii) an interpolation scheme of constructing pixel-wise blur matrix from region-wise blur kernels, and (iii) a non-blind deblurring method robust to kernel errors. The experiments showed that the proposed method outperformed the existing software based approaches on tested real images.",
"title": ""
},
{
"docid": "5f2c53865316c1eb47fc734f53e10b00",
"text": "In recent years we have witnessed a proliferation of data structure and algorithm proposals for efficient deep packet inspection on memory based architectures. In parallel, we have observed an increasing interest in network processors as target architectures for high performance networking applications.\n In this paper we explore design alternatives in the implementation of regular expression matching architectures on network processors (NPs) and general purpose processors (GPPs). Specifically, we present a performance evaluation on an Intel IXP2800 NP, on an Intel Xeon GPP and on a multiprocessor system consisting of four AMD Opteron 850 cores. Our study shows how to exploit the Intel IXP2800 architectural features in order to maximize system throughput, identifies and evaluates algorithmic and architectural trade-offs and limitations, and highlights how the presence of caches affects the overall performances. We provide an implementation of our NP designs within the Open Network Laboratory (http://www.onl.wustl.edu).",
"title": ""
},
{
"docid": "869e01855c8cfb9dc3e64f7f3e73cd60",
"text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.",
"title": ""
},
{
"docid": "7c525afc11c41e0a8ca6e8c48bdec97c",
"text": "AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB interface and characterize the powerful functionality exposed, including the ability to rewrite device firmware, bypass Android security mechanisms, exfiltrate sensitive device information, perform screen unlocks, and inject touch events solely through the use of AT commands. We demonstrate that the AT command interface contains an alarming amount of unconstrained functionality and represents a broad attack surface on Android devices.",
"title": ""
},
{
"docid": "3e850a45249f45e95d1a7413e7b142f1",
"text": "In our increasingly “data-abundant” society, remote sensing big data perform massive, high dimension and heterogeneity features, which could result in “dimension disaster” to various extent. It is worth mentioning that the past two decades have witnessed a number of dimensional reductions to weak the spatiotemporal redundancy and simplify the calculation in remote sensing information extraction, such as the linear learning methods or the manifold learning methods. However, the “crowding” and mixing when reducing dimensions of remote sensing categories could degrade the performance of existing techniques. Then in this paper, by analyzing probability distribution of pairwise distances among remote sensing datapoints, we use the 2-mixed Gaussian model(GMM) to improve the effectiveness of the theory of t-Distributed Stochastic Neighbor Embedding (t-SNE). A basic reducing dimensional model is given to test our proposed methods. The experiments show that the new probability distribution capable retains the local structure and significantly reveals differences between categories in a global structure.",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "ecabfcbb40fc59f1d1daa02502164b12",
"text": "We present a generalized line histogram technique to compute global rib-orientation for detecting rotated lungs in chest radiographs. We use linear structuring elements, such as line seed filters, as kernels to convolve with edge images, and extract a set of lines from the posterior rib-cage. After convolving kernels in all possible orientations in the range [0, π], we measure the angle for which the line histogram has maximum magnitude. This measure provides a good approximation of the global chest rib-orientation for each lung. A chest radiograph is said to be upright if the difference between the orientation angles of both lungs with respect to the horizontal axis, is negligible. We validate our method on sets of normal and abnormal images and argue that rib orientation can be used for rotation detection in chest radiographs as aid in quality control during image acquisition, and to discard images from training and testing data sets. In our test, we achieve a maximum accuracy of 90%.",
"title": ""
},
{
"docid": "ab57df7702fa8589f7d462c80d9a2598",
"text": "The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.",
"title": ""
},
{
"docid": "dd270ffa800d633a7a354180eb3d426c",
"text": "I have taken an experimental approach to this question. Freely voluntary acts are pre ceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites.",
"title": ""
},
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
{
"docid": "afac9140d183eac56785b26069953342",
"text": "Big Data means extremely huge large data sets that can be analyzed to find patterns, trends. One technique that can be used for data analysis so that able to help us find abstract patterns in Big Data is Deep Learning. If we apply Deep Learning to Big Data, we can find unknown and useful patterns that were impossible so far. With the help of Deep Learning, AI is getting smart. There is a hypothesis in this regard, the more data, the more abstract knowledge. So a handy survey of Big Data, Deep Learning and its application in Big Data is necessary. In this paper, we provide a comprehensive survey on what is Big Data, comparing methods, its research problems, and trends. Then a survey of Deep Learning, its methods, comparison of frameworks, and algorithms is presented. And at last, application of Deep Learning in Big Data, its challenges, open research problems and future trends are presented.",
"title": ""
},
{
"docid": "cc93f5a421ad0e5510d027b01582e5ae",
"text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. The study uncovered that inflation and real income were the major determinants of savings during the sample period. High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.",
"title": ""
},
{
"docid": "c526e32c9c8b62877cb86bc5b097e2cf",
"text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.",
"title": ""
},
{
"docid": "129a85f7e611459cf98dc7635b44fc56",
"text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.",
"title": ""
},
{
"docid": "0b25f2989e18d04a9262b4a6fe107c9f",
"text": "Delay Tolerant Networking has been a hot topic of interest in networking since the start of the century, and has sparked a significant amount of research in the area, particularly in an age where the ultimate goal is to provide ubiquitous connectivity, even in regions previously considered inaccessible. Protocols and applications in popular use on the Internet are not readily applicable to such networks, that are characterized by long delays and inconsistent connectivity. In this paper, we summarize the wealth of literature in this field in the form of a concise, but comprehensive tutorial. The paper is designed to bring researchers new to the field with a general picture of the state of the art in this area, and motivate them to begin exploring problems in the field quickly.",
"title": ""
},
{
"docid": "3d390bed1ca485abd79073add7e781ba",
"text": "Predicting the future to anticipate the outcome of events and actions is a critical attribute of autonomous agents; particularly for agents which must rely heavily on real time visual data for decision making. Working towards this capability, we address the task of predicting future frame segmentation from a stream of monocular video by leveraging the 3D structure of the scene. Our framework is based on learnable sub-modules capable of predicting pixel-wise scene semantic labels, depth, and camera ego-motion of adjacent frames. We further propose a recurrent neural network based model capable of predicting future ego-motion trajectory as a function of a series of past ego-motion steps. Ultimately, we observe that leveraging 3D structure in the model facilitates successful prediction, achieving state of the art accuracy in future semantic segmentation.",
"title": ""
},
{
"docid": "adeebdc680819ca992f9d53e4866122a",
"text": "Large numbers of black kites (Milvus migrans govinda) forage with house crows (Corvus splendens) at garbage dumps in many Indian cities. Such aggregation of many individuals results in aggressiveness where adoption of a suitable behavioral approach is crucial. We studied foraging behavior of black kites in dumping sites adjoining two major corporation markets of Kolkata, India. Black kites used four different foraging tactics which varied and significantly influenced foraging attempts and their success rates. Kleptoparasitism was significantly higher than autonomous foraging events; interspecific kleptoparasitism was highest in occurrence with a low success rate, while ‘autonomous-ground’ was least adopted but had the highest success rate.",
"title": ""
},
{
"docid": "427970a79aa36ec6b1c9db08d093c6d0",
"text": "Recommendation system provides the facility to understand a person's taste and find new, desirable content for them automatically based on the pattern between their likes and rating of different items. In this paper, we have proposed a recommendation system for the large amount of data available on the web in the form of ratings, reviews, opinions, complaints, remarks, feedback, and comments about any item (product, event, individual and services) using Hadoop Framework. We have implemented Mahout Interfaces for analyzing the data provided by review and rating site for movies.",
"title": ""
}
] |
scidocsrr
|
8c29f58c78c6307c1bf90ae2164ffa8c
|
CANaLI: A System for Answering Controlled Natural Language Questions on RDF Knowledge Bases, UCLA CSD Technical Report Number: 160004
|
[
{
"docid": "ed189b8fa606cc2d86706d199dd71a89",
"text": "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7%. PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75%. The PATTY resource is freely available for interactive access and download.",
"title": ""
},
{
"docid": "792766143996c6f5b3dc563c46e70358",
"text": "The third instalment of the open challenge on Question Answering over Linked Data (QALD-3) has been conducted as a half-day lab at CLEF 2013. Differently from previous editions of the challenge, QALD-3 put a strong emphasis on multilinguality, offering two tasks: one on multilingual question answering and one on ontology lexicalization. While no submissions were received for the latter, the former attracted six teams who submitted their systems’ results on the provided datasets. This paper provides an overview of QALD-3, discussing the approaches experimented by the participating systems as well as the obtained results.",
"title": ""
}
] |
[
{
"docid": "715d5bc3c7a9b4ff9008c609bb79100c",
"text": "A new direct method for calculating the electrostatic force of electroadhesive robots generated by interdigital electrodes is presented. Here, series expansion is employed to express the spatial potential, and point matching method is used in dealing with some boundary conditions. The attraction force is calculated using the Maxwell stress tensor formula. The accuracy of this method is verified through comparing our results with that of simulation work as well as reported experimental data, the agreement is found to be very good.",
"title": ""
},
{
"docid": "e4405c71336ea13ccbd43aa84651dc60",
"text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.",
"title": ""
},
{
"docid": "7987fb60db4da3aaea64d6382c8d62bd",
"text": "This essay gives a brief study of Domestication and Foreignization and the disputes over these two basic translation strategies which provide both linguistic and cultural guidance. Domestication designates the type of translation in which a transparent, fluent style is adopted to minimize the strangeness of the foreign text for target language readers; while foreignization means a target text is produced which deliberately breaks target conventions by retaining something of the foreignness of the original. In the contemporary international translation field, Eugene Nida is regarded as the representative of those who favour domesticating translation, whereas the Italian scholar Lawrence Venuti is regarded to be the spokesman for those who favour foreignizing translation, who has also led the debate to a white-hot state.",
"title": ""
},
{
"docid": "143da39941ecc8fb69e87d611503b9c0",
"text": "A dual-core 64b Xeonreg MP processor is implemented in a 65nm 8M process. The 435mm2 die has 1.328B transistors. Each core has two threads and a unified 1MB L2 cache. The 16MB unified, 16-way set-associative L3 cache implements both sleep and shut-off leakage reduction modes",
"title": ""
},
{
"docid": "2f62286e593b716ab1dad2bba066d813",
"text": "Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work.",
"title": ""
},
{
"docid": "42050d2d11a30e003b9d35fad12daa5e",
"text": "Document is unavailable: This DOI was registered to an article that was not presented by the author(s) at this conference. As per section 8.2.1.B.13 of IEEE's \"Publication Services and Products Board Operations Manual,\" IEEE has chosen to exclude this article from distribution. We regret any inconvenience.",
"title": ""
},
{
"docid": "04013595912b4176574fb81b38beade5",
"text": "This chapter presents an overview of the current state of cognitive task analysis (CTA) in research and practice. CTA uses a variety of interview and observation strategies to capture a description of the explicit and implicit knowledge that experts use to perform complex tasks. The captured knowledge is most often transferred to training or the development of expert systems. The first section presents descriptions of a variety of CTA techniques, their common characteristics, and the typical strategies used to elicit knowledge from experts and other sources. The second section describes research on the impact of CTA and synthesizes a number of studies and reviews pertinent to issues underlying knowledge elicitation. In the third section, we discuss the integration of CTA with training design. Finally, in the fourth section, we present a number of recommendations for future research and conclude with general comments.",
"title": ""
},
{
"docid": "a6c39c728d2338e8eb6bc7b255952cea",
"text": "Clustering methods need to be robust if they are to be useful in practice. In this paper, we analyze several popular robust clustering methods and show that they have much in common. We also establish a connection between fuzzy set theory and robust statistics and point out the similarities between robust clustering methods and statistical methods such as the weighted least-squares (LS) technique, the M estimator, the minimum volume ellipsoid (MVE) algorithm, cooperative robust estimation (CRE), minimization of probability of randomness (MINPRAN), and the epsilon contamination model. By gleaning the common principles upon which the methods proposed in the literature are based, we arrive at a unified view of robust clustering methods. We define several general concepts that are useful in robust clustering, state the robust clustering problem in terms of the defined concepts, and propose generic algorithms and guidelines for clustering noisy data. We also discuss why the generalized Hough transform is a suboptimal solution to the robust clustering problem.",
"title": ""
},
{
"docid": "6723049ea783b15426dc5335872e4f75",
"text": "A method of using magnetic torque rods to do 3axis spacecraft attitude control has been developed. The goal of this system is to achieve a nadir pointing accuracy on the order of 0.1 to 1.0 deg without the need for thrusters or wheels. The open-loop system is under-actuated because magnetic torque rods cannot torque about the local magnetic field direction. This direction moves in space as the spacecraft moves along an inclined orbit, and the resulting system is roughly periodic. Periodic controllers are designed using an asymptotic linear quadratic regulator technique. The control laws include integral action and saturation logic. This system's performance has been studied via analysis and simulation. The resulting closed-loop systems are robust with respect to parametric modeling uncertainty. They converge from initial attitude errors of 30 deg per axis, and they achieve steady-state pointing errors on the order of 0.5 to 1.0 deg in the presence of drag torques and unmodeled residual dipole moments. Introduction All spacecraft have an attitude stabilization system. They range from passive spin-stabilized 1 or gravitygradient stabilized 2 systems to fully active three-axis controlled systems . Pointing accuracies for such systems may range from 10 deg down to 10 deg or better, depending on the spacecraft design and on the types of sensors and actuators that it carries. The most accurate designs normally include momentum wheels or reaction wheels. This paper develops an active 3-axis attitude stabilization system for a nadir-pointing spacecraft. It uses only magnetic torque rods as actuators. Additional components of the system include appropriate attitude sensors and a magnetometer. The goal of this system is to achieve pointing accuracy that is better than a gravity gradient stabilization system, on the order of 0.1 to 1 deg. Such a system will weigh less than either a gravity-gradient system or a wheelbased system, and it will use less power than a wheel∗ Associate Professor, Sibley School of Mech. & Aero. Engr. Associate Fellow, AIAA. Copyright 2000 by Mark L. Psiaki. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. based system. Thus, it will be ideal for small satellite applications, where weight and power budgets are severely restricted. There are two classic uses of magnetic torque rods in attitude control. One is for momentum management of wheel-based systems . The other is for angularmomentum and nutation control of spinning , momentum-biased , and dual-spin spacecraft . The present study is one of a growing number that consider active 3-axis magnetic attitude stabilization of a nadir-pointing spacecraft . Reference 5 also should be classified with this group because it uses similar techniques. Reference 7, the earliest such study, presents a 3-axis proportional-derivative control law. It computes a desired torque and projects it perpendicular to the Earth's magnetic field in order to determine the actual torque. Projection is necessary because the magnetic torque, nm, takes the form nm = m × b (1) where m is the magnetic dipole moment vector of the torque rods and b is the Earth's magnetic field. Equation (1) highlights the principal problem of magnetic-torque-based 3-axis attitude control: the system is under-actuated. A rigid spacecraft has 3 rotational degrees of freedom, but the torque rods can only torque about the 2 axes that are perpendicular to the magnetic field vector. 
The system is controllable if the orbit is inclined because the Earth's magnetic field vector rotates in space as the spacecraft moves around its orbit. It is a time-varying system that is approximately periodic. This system's under-actuation and its periodicity combine to create a challenging feedback controller design problem. The present problem is different from the problem of attitude control when thrusters or reaction wheels provide torque only about 2 axes. References 15 and 16 and others have addressed this alternate problem, in which the un-actuated direction is defined in spacecraft coordinates. For magnetic torques, the un-actuated direction does not rotate with the spacecraft. Various control laws have been considered for magnetic attitude control systems. Some of the controllers are similar to the original controller of Martel et al. . Time-varying Linear Quadratic Regulator (LQR) formulations have been tried , as has fuzzy control 9 and sliding-mode control . References 9 and 13 patch together solutions of time-",
"title": ""
},
{
"docid": "0f3cfd3df3022b1afca97f6517e42c58",
"text": "BACKGROUND\nSegmental pigmentation disorder (SegPD) is a rare type of cutaneous dyspigmentation. This hereditary disorder, first described some 20 years ago, is characterized by hypo and hyperpigmented patches on the trunk, extremities and less likely on the face and neck. These lesions are considered as a type of checkerboard pattern.\n\n\nCASE PRESENTATION\nHerein, we present a 26-year-old male who presented with hyperpigmented patches on his trunk, neck and upper extremities. Considering the clinical and histopathological findings, the diagnosis of SegPD was confirmed.\n\n\nCONCLUSION\nSegPD is a somewhat neglected entity which should be considered in differential diagnosis of pigmentation disorders.",
"title": ""
},
{
"docid": "baa4ba011558078ac1cf4c5648545b6b",
"text": "The now ubiquitous Android platform lacks security features that are considered to be necessary given how easily an application can be uploaded on markets by third-party developers and distributed to a large set of devices. Fortunately, static analysis can help developers, markets and users improve the quality and security of applications at a reasonable cost by being automated. While most existing analyses target specific security properties, we take a step back to build better foundations for the analysis of Android applications. We describe a model and give semantics for a significant part of the system by studying what obstacles existing analyses have faced. We then adapt a classical analysis, known as points-to analysis, to applications. This leads us to design and implement a new form of context-sensitivity for Android, paving the way for further experimentation and more specific security analyses.",
"title": ""
},
{
"docid": "065e6db1710715ce5637203f1749e6f6",
"text": "Software fault isolation (SFI) is an effective mechanism to confine untrusted modules inside isolated domains to protect their host applications. Since its debut, researchers have proposed different SFI systems for many purposes such as safe execution of untrusted native browser plugins. However, most of these systems focus on the x86 architecture. Inrecent years, ARM has become the dominant architecture for mobile devices and gains in popularity in data centers.Hence there is a compellingneed for an efficient SFI system for the ARM architecture. Unfortunately, existing systems either have prohibitively high performance overhead or place various limitations on the memory layout and instructions of untrusted modules.\n In this paper, we propose ARMlock, a hardware-based fault isolation for ARM. It uniquely leverages the memory domain support in ARM processors to create multiple sandboxes. Memory accesses by the untrusted module (including read, write, and execution) are strictly confined by the hardware,and instructions running inside the sandbox execute at the same speed as those outside it. ARMlock imposes virtually no structural constraints on untrusted modules. For example, they can use self-modifying code, receive exceptions, and make system calls. Moreover, system calls can be interposed by ARMlock to enforce the policies set by the host. We have implemented a prototype of ARMlock for Linux that supports the popular ARMv6 and ARMv7 sub-architecture. Our security assessment and performance measurement show that ARMlock is practical, effective, and efficient.",
"title": ""
},
{
"docid": "400d5442a02f81410429d9023bd01d79",
"text": "In this paper we present a question answering system using a neural network to interpret questions learned from the DBpedia repository. We train a sequenceto-sequence neural network model with n-triples extracted from the DBpedia Infobox Properties. Since these properties do not represent the natural language, we further used question-answer dialogues from movie subtitles. Although the automatic evaluation shows a low overlap of the generated answers compared to the gold standard set, a manual inspection of the showed promising outcomes from the experiment for further work.",
"title": ""
},
{
"docid": "c6725a67f1fa2b091e0bbf980e6260be",
"text": "This paper examines job satisfaction and employees’ turnover intentions in Total Nigeria PLC in Lagos State. The paper highlights and defines basic concepts of job satisfaction and employees’ turnover intention. It specifically considered satisfaction with pay, nature of work and supervision as the three facets of job satisfaction that affect employee turnover intention. To achieve this objective, authors adopted a survey method by administration of questionnaires, conducting interview and by reviewing archival documents as well as review of relevant journals and textbooks in this field of learning as means of data collection. Four (4) major hypotheses were derived from literature and respective null hypotheses tested at .05 level of significance It was found that specifically job satisfaction reduces employees’ turnover intention and that Total Nigeria PLC adopts standard pay structure, conducive nature of work and efficient supervision not only as strategies to reduce employees’ turnover but also as the company retention strategy.",
"title": ""
},
{
"docid": "2cbd6b3d19d0cf843a9e18f5b23872d2",
"text": "The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19% relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code1 and the detection results for person used will be publicly available for further research.",
"title": ""
},
{
"docid": "f1ab979a80ffed5ac002ad13d9a0c2ea",
"text": "Interleaved multiphase synchronous buck converters are often used to power computer CPU, GPU, and memory to meet the demands for increasing load current and fast current slew rate of the processors. This paper reports and explains undesired coupling between discrete inductors in typical commercial multiphase applications where space is limited. In this paper, equations of coupling coefficient are derived for single-turn and multiturn Ferrite core inductors commonly used in multiphase converters and are verified by Maxwell static simulation. The influence of the coupling effect on inductor current waveforms is demonstrated by Maxwell transient simulation and confirmed by experiments. The analysis provides a useful tool for mitigating the coupling effect in multiphase converters to avoid early inductor saturation and interference between phases. Design guidelines and recommendations are provided to minimize the unwanted coupling effect in multiphase converters.",
"title": ""
},
{
"docid": "be69aad411258c904ef359a721333015",
"text": "Electromyography (EMG) is the subject which deals with the detection, analysis and utilization of electrical signals emanating from skeletal muscles. The field of electromyography is studied in Biomedical Engineering. And prosthesis using electromyography is achieved under Biomechatronics [1]. The electric signal produced during muscle activation, known as the myoelectric signal, is produced from small electrical currents generated by the exchange of ions across the muscle membranes and detected with the help of electrodes. Electromyography is used to evaluate and record the electrical activity produced by muscles of a human body. The instrument from which we obtain the EMG signal is known as electromyograph and the resultant record obtained is known as electromyogram [2].",
"title": ""
},
{
"docid": "ce6d7185031f1b205181298909e8a020",
"text": "BACKGROUND\nMost preschoolers with viral wheezing exacerbations are not atopic.\n\n\nAIM\nTo test in a prospective controlled trial whether wheezing preschoolers presenting to the ED are different from the above in three different domains defining asthma: the atopic characteristics based on stringent asthma predictive index (S-API), the characteristics of bronchial hyper-responsiveness (BHR), and airway inflammation.\n\n\nMETHODS\nThe S-API was prospectively collected in 41 preschoolers (age 31.9 ± 17.4 months, range; 1-6 years) presenting to the ED with acute wheezing and compared to healthy preschoolers (n = 109) from our community (community control group). Thirty out of the 41 recruited preschoolers performed two sets of bronchial challenge tests (BCT)-(methacholine and adenosine) within 3 weeks and following 3 months of the acute event and compared to 30 consecutive ambulatory preschoolers, who performed BCT for diagnostic workup in our laboratory (ambulatory control group). On presentation, induced sputum (IS) was obtained from 22 of the 41 children.\n\n\nOUTCOMES\nPrimary: S-API, secondary: BCTs characteristics and percent eosinophils in IS.\n\n\nRESULTS\nSignificantly more wheezing preschoolers were S-API positive compared with the community control group: 20/41 (48.7%) versus 15/109 (13.7%, P < 0.001). All methacholine-BCTs-30/30 (100%) were positive compared with 13/14 (92.8%) in the ambulatory control group (P = 0.32). However, 23/27 (85.2%) were adenosine-BCT positive versus 3/17 (17.5%) in the ambulatory control group (P < 0.001). Diagnostic IS success rate was 18/22 (81.8%). Unexpectedly, 9/18 (50.0%) showed eosinophilia in the IS.\n\n\nCONCLUSIONS\nWheezing preschoolers presenting to the ED is a unique population with significantly higher rate of positive S-API and adenosine-BCT compared with controls and frequently (50%) express eosinophilic airway inflammation.",
"title": ""
},
{
"docid": "17bf75156f1ffe0daffd3dbc5dec5eb9",
"text": "Celebrities are admired, appreciated and imitated all over the world. As a natural result of this, today many brands choose to work with celebrities for their advertisements. It can be said that the more the brands include celebrities in their marketing communication strategies, the tougher the competition in this field becomes and they allocate a large portion of their marketing budget to this. Brands invest in celebrities who will represent them in order to build the image they want to create. This study aimed to bring under spotlight the perceptions of Turkish customers regarding the use of celebrities in advertisements and marketing communication and try to understand their possible effects on subsequent purchasing decisions. In addition, consumers’ reactions and perceptions were investigated in the context of the product-celebrity match, to what extent the celebrity conforms to the concept of the advertisement and the celebrity-target audience match. In order to achieve this purpose, a quantitative research was conducted as a case study concerning Mavi Jeans (textile company). Information was obtained through survey. The results from this case study are supported by relevant theories concerning the main subject. The most valuable result would be that instead of creating an advertisement around a celebrity in demand at the time, using a celebrity that fits the concept of the advertisement and feeds the concept rather than replaces it, that is celebrity endorsement, will lead to more striking and positive results. Keywords—Celebrity endorsement, product-celebrity match, advertising.",
"title": ""
},
{
"docid": "c36f2fd7bf8ef65bf443954e6be7107a",
"text": "Process mining is a tool to extract non-trivial and useful information from process execution logs. These so-called event logs (also called audit trails, or transaction logs) are the starting point for various discovery and analysis techniques that help to gain insight into certain characteristics of the process. In this paper we use a combination of process mining techniques to discover multiple perspectives (namely, the control-flow, data, performance, and resource perspective) of the process from historic data, and we integrate them into a comprehensive simulation model. This simulation model is represented as a Coloured Petri net (CPN) and can be used to analyze the process, e.g., evaluate the performance of different alternative designs. The discovery of simulation models is explained using a running example. Moreover, the approach has been applied in two case studies; the workflows in two different municipalities in the Netherlands have been analyzed using a combination of process mining and simulation. Furthermore, the quality of the CPN models generated for the running example and the two case studies has been evaluated by comparing the original logs with the logs of the generated models.",
"title": ""
}
] |
scidocsrr
|
8a519abac6a583ebb89fc1ac8d42a377
|
Midface: Clinical Anatomy and Regional Approaches with Injectable Fillers.
|
[
{
"docid": "f2e13ac41fc61bfc1b8e9c7171608518",
"text": "BACKGROUND\nThe exact anatomical cause of the tear trough remains undefined. This study was performed to identify the anatomical basis for the tear trough deformity.\n\n\nMETHODS\nForty-eight cadaveric hemifaces were dissected. With the skin over the midcheek intact, the tear trough area was approached through the preseptal space above and prezygomatic space below. The origins of the palpebral and orbital parts of the orbicularis oculi (which sandwich the ligament) were released meticulously from the maxilla, and the tear trough ligament was isolated intact and in continuity with the orbicularis retaining ligament. The ligaments were submitted for histologic analysis.\n\n\nRESULTS\nA true osteocutaneous ligament called the tear trough ligament was consistently found on the maxilla, between the palpebral and orbital parts of the orbicularis oculi, cephalad and caudal to the ligament, respectively. It commences medially, at the level of the insertion of the medial canthal tendon, just inferior to the anterior lacrimal crest, to approximately the medial-pupil line, where it continues laterally as the bilayered orbicularis retaining ligament. Histologic evaluation confirmed the ligamentous nature of the tear trough ligament, with features identical to those of the zygomatic ligament.\n\n\nCONCLUSIONS\nThis study clearly demonstrated that the prominence of the tear trough has its anatomical origin in the tear trough ligament. This ligament has not been isolated previously using standard dissection, but using the approach described, the tear trough ligament is clearly seen. The description of this ligament sheds new light on considerations when designing procedures to address the tear trough and the midcheek.",
"title": ""
}
] |
[
{
"docid": "098b9b80d27fddd6407ada74a8fd4590",
"text": "We have developed a 1.55-μm 40 Gbps electro-absorption modulator laser (EML)-based transmitter optical subassembly (TOSA) using a novel flexible printed circuit (FPC). The return loss at the junctions of the printed circuit board and the FPC, and of the FPC and the ceramic feedthrough connection was held better than 20 dB at up to 40 GHz by a newly developed three-layer FPC. The TOSA was fabricated and demonstrated a mask margin of >16% and a path penalty of <;0.63 dB for a 43 Gbps signal after 2.4-km SMF transmission over the entire case temperature range from -5° to 80 °C, demonstrating compliance with ITU-T G.693. These results are comparable to coaxial connector type EML modules. This TOSA is expected to be a strong candidate for 40 Gbps EML modules with excellent operating characteristics, economy, and a small footprint.",
"title": ""
},
{
"docid": "93efc06a282a12fb65038381cf390e19",
"text": "Linked Open Data (LOD) comprises an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced or extracted data of often relatively low quality. We present a methodology for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. We argue that vocabularies, ontologies and knowledge bases should be accompanied by a number of test cases, which help to ensure a basic level of quality. We present a methodology for assessing the quality of linked data resources, based on a formalization of bad smells and data quality problems. Our formalization employs SPARQL query templates, which are instantiated into concrete quality test case queries. Based on an extensive survey, we compile a comprehensive library of data quality test case patterns. We perform automatic test case instantiation based on schema constraints or semi-automatically enriched schemata and allow the user to generate specific test case instantiations that are applicable to a schema or dataset. We provide an extensive evaluation of five LOD datasets, manual test case instantiation for five schemas and automatic test case instantiations for all available schemata registered with Linked Open Vocabularies (LOV). One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.",
"title": ""
},
{
"docid": "da02328df767c4046a352e999914bc20",
"text": "We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis.",
"title": ""
},
{
"docid": "96b1688b19bf71e8f1981d9abe52fc2c",
"text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.",
"title": ""
},
{
"docid": "3a92c6ba669ae5002979e4347434a120",
"text": "This paper highlights a 14nm Analog and RF technology based on a logic FinFET platform for the first time. An optimized RF device layout shows excellent Ft/Fmax of (314GHz/180GHz) and (285GHz/140GHz) for NFET and PFET respectively. A higher PFET RF performance compared to 28nm technology is due to a source/drain stressor mobility improvement. A benefit of better FinFET channel electrostatics can be seen in the self-gain (Gm/Gds), which shows a significant increase to 40 and 34 for NFET and PFET respectively. Superior 1/f noise of 17/35 f(V∗μm)2/Hz @ 1KHz for N/PFET respectively is also achieved. To extend further low voltage operation and power saving, ultra-low Vt devices are also developed. Furthermore, a deep N-well (triple well) process is introduced to improve the ultra-low signal immunity from substrate noise, while offering useful devices like VNPN and high breakdown voltage deep N-well diodes. A superior Ft/Fmax, high self-gain, low 1/f noise and substrate isolation characteristics truly extend the capability of the 14nm FinFETs for analog and RF applications.",
"title": ""
},
{
"docid": "072a203514eb53db7aa9aaa55c6745d8",
"text": "The possibility to estimate accurately the subsurface electric properties from ground-penetrating radar (GPR) signals using inverse modeling is obstructed by the appropriateness of the forward model describing the GPR subsurface system. In this paper, we improved the recently developed approach of Lambot et al. whose success relies on a stepped-frequency continuous-wave (SFCW) radar combined with an off-ground monostatic transverse electromagnetic horn antenna. This radar configuration enables realistic and efficient forward modeling. We included in the initial model: 1) the multiple reflections occurring between the antenna and the soil surface using a positive feedback loop in the antenna block diagram and 2) the frequency dependence of the electric properties using a local linear approximation of the Debye model. The model was validated in laboratory conditions on a tank filled with a two-layered sand subject to different water contents. Results showed remarkable agreement between the measured and modeled Green's functions. Model inversion for the dielectric permittivity further demonstrated the accuracy of the method. Inversion for the electric conductivity led to less satisfactory results. However, a sensitivity analysis demonstrated the good stability properties of the inverse solution and put forward the necessity to reduce the remaining clutter by a factor 10. This may partly be achieved through a better characterization of the antenna transfer functions and by performing measurements in an environment without close extraneous scatterers.",
"title": ""
},
{
"docid": "04c029380ae73b75388ab02f901fda7d",
"text": "We present a novel method to solve image analogy problems [3]: it allows to learn the relation between paired images present in training data, and then generalize and generate images that correspond to the relation, but were never seen in the training set. Therefore, we call the method Conditional Analogy Generative Adversarial Network (CAGAN), as it is based on adversarial training and employs deep convolutional neural networks. An especially interesting application of that technique is automatic swapping of clothing on fashion model photos. Our work has the following contributions. First, the definition of the end-to-end trainable CAGAN architecture, which implicitly learns segmentation masks without expensive supervised labeling data. Second, experimental results show plausible segmentation masks and often convincing swapped images, given the target article. Finally, we discuss the next steps for that technique: neural network architecture improvements and more advanced applications.",
"title": ""
},
{
"docid": "35c18e570a6ab44090c1997e7fe9f1b4",
"text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. Meanwhile users can maintain full control over access to their uploaded ?les and data, by assigning ?ne-grained, attribute-based access privileges to selected files and data, while di?erent users can have access to di?erent parts of the System. This application allows clients to set privileges to different users to access their data.",
"title": ""
},
{
"docid": "ba755cab267998a3ea813c0f46c8c99c",
"text": "In this paper, we developed a deep neural network (DNN) that learns to solve simultaneously the three tasks of the cQA challenge proposed by the SemEval-2016 Task 3, i.e., question-comment similarity, question-question similarity and new question-comment similarity. The latter is the main task, which can exploit the previous two for achieving better results. Our DNN is trained jointly on all the three cQA tasks and learns to encode questions and comments into a single vector representation shared across the multiple tasks. The results on the official challenge test set show that our approach produces higher accuracy and faster convergence rates than the individual neural networks. Additionally, our method, which does not use any manual feature engineering, approaches the state of the art established with methods that make heavy use of it.",
"title": ""
},
{
"docid": "754fb355da63d024e3464b4656ea5e8d",
"text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.",
"title": ""
},
{
"docid": "c4144108562408238992d7529cf77ad7",
"text": "This article presents an approach to context-aware services for “smart buildings” based on Web and Semantic Web techniques. The services striven for are first described, then their realization using Web and Semantic Web services is explained. Finally, advantages of the approach are stressed.",
"title": ""
},
{
"docid": "3f157067ce2d5d6b6b4c9d9faaca267b",
"text": "The rise of network forms of organization is a key consequence of the ongoing information revolution. Business organizations are being newly energized by networking, and many professional militaries are experimenting with flatter forms of organization. In this chapter, we explore the impact of networks on terrorist capabilities, and consider how this development may be associated with a move away from emphasis on traditional, episodic efforts at coercion to a new view of terror as a form of protracted warfare. Seen in this light, the recent bombings of U.S. embassies in East Africa, along with the retaliatory American missile strikes, may prove to be the opening shots of a war between a leading state and a terror network. We consider both the likely context and the conduct of such a war, and offer some insights that might inform policies aimed at defending against and countering terrorism.",
"title": ""
},
{
"docid": "cd2fcc3e8ba9fce3db77c4f1e04ad287",
"text": "Technological advances are being made to assist humans in performing ordinary tasks in everyday settings. A key issue is the interaction with objects of varying size, shape, and degree of mobility. Autonomous assistive robots must be provided with the ability to process visual data in real time so that they can react adequately for quickly adapting to changes in the environment. Reliable object detection and recognition is usually a necessary early step to achieve this goal. In spite of significant research achievements, this issue still remains a challenge when real-life scenarios are considered. In this article, we present a vision system for assistive robots that is able to detect and recognize objects from a visual input in ordinary environments in real time. The system computes color, motion, and shape cues, combining them in a probabilistic manner to accurately achieve object detection and recognition, taking some inspiration from vision science. In addition, with the purpose of processing the input visual data in real time, a graphical processing unit (GPU) has been employed. The presented approach has been implemented and evaluated on a humanoid robot torso located at realistic scenarios. For further experimental validation, a public image repository for object recognition has been used, allowing a quantitative comparison with respect to other state-of-the-art techniques when realworld scenes are considered. Finally, a temporal analysis of the performance is provided with respect to image resolution and the number of target objects in the scene.",
"title": ""
},
{
"docid": "fd256fe226d32fab1fca93be1d08ed32",
"text": "Data security in the cloud is a big concern that blocks the widespread use of the cloud for relational data management. First, to ensure data security, data confidentiality needs to be provided when data resides in storage as well as when data is dynamically accessed by queries. Prior works on query processing on encrypted data did not provide data confidentiality guarantees in both aspects. Tradeoff between secrecy and efficiency needs to be made when satisfying both aspects of data confidentiality while being suitable for practical use. Second, to support common relational data management functions, various types of queries such as exact queries, range queries, data updates, insertion and deletion should be supported. To address these issues, this paper proposes a comprehensive framework for secure and efficient query processing of relational data in the cloud. Our framework ensures data confidentiality using a salted IDA encoding scheme and column-access-via-proxy query processing primitives, and ensures query efficiency using matrix column accesses and a secure B+-tree index. In addition, our framework provides data availability and integrity. We establish the security of our proposal by a detailed security analysis and demonstrate the query efficiency of our proposal through an experimental evaluation.",
"title": ""
},
{
"docid": "595cb7698c38b9f5b189ded9d270fe69",
"text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.",
"title": ""
},
{
"docid": "b3f5d9335cccf62797c86b76fa2c9e7e",
"text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017",
"title": ""
},
{
"docid": "5551eb3819e33d5eeaadf3ebb636d961",
"text": "There are many problems in security of Internet of Things (IOT) crying out for solutions, such as RFID tag security, wireless security, network transmission security, privacy protection, information processing security. This article is based on the existing researches of network security technology. And it provides a new approach for researchers in certain IOT application and design, through analyzing and summarizing the security of ITO from various angles.",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "4ecacdd24a9e615f21535a026d9c2ab5",
"text": "Text superimposed on the video frames provides supplemental but important information for video indexing and retrieval. Many efforts have been made for videotext detection and recognition (Video OCR). The main difficulties of video OCR are the low resolution and the background complexity. In this paper, we present efficient schemes to deal with the second difficulty by sufficiently utilizing multiple frames that contain the same text to get every clear word from these frames. Firstly, we use multiple frame verification to reduce text detection false alarms. And then choose those frames where the text is most likely clear, thus it is more possible to be correctly recognized. We then detect and joint every clear text block from those frames to form a clearer “man-made” frame. Later we apply a block-based adaptive thresholding procedure on these “man-made” frames. Finally, the binarized frames are sent to OCR engine for recognition. Experiments show that the word recognition rate has been increased over 28% by these methods.",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] |
scidocsrr
|
7673e923b3dde4b0e791e34ba5d4441c
|
SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints
|
[
{
"docid": "f6b8ad20e1afd5d8aa63b16042d59f99",
"text": "In the domain of sequence modelling, Recurrent Neural Networks (RNN) have been capable of achieving impressive results in a variety of application areas including visual question answering, part-of-speech tagging and machine translation. However this success in modelling short term dependencies has not successfully transitioned to application areas such as trajectory prediction, which require capturing both short term and long term relationships. In this paper, we propose a Tree Memory Network (TMN) for modelling long term and short term relationships in sequence-to-sequence mapping problems. The proposed network architecture is composed of an input module, controller and a memory module. In contrast to related literature, which models the memory as a sequence of historical states, we model the memory as a recursive tree structure. This structure more effectively captures temporal dependencies across both short term and long term sequences using its hierarchical structure. We demonstrate the effectiveness and flexibility of the proposed TMN in two practical problems, aircraft trajectory modelling and pedestrian trajectory modelling in a surveillance setting, and in both cases we outperform the current state-of-the-art. Furthermore, we perform an in depth analysis on the evolution of the memory module content over time and provide visual evidence on how the proposed TMN is able to map both long term and short term relationships efficiently via a hierarchi1 ar X iv :1 70 3. 04 70 6v 1 [ cs .L G ] 1 2 M ar 2 01 7",
"title": ""
}
] |
[
{
"docid": "63a29e42a28698339d7d1f5e1a2fabcc",
"text": "(n) k edges have equal probabilities to be chosen as the next one . We shall 2 study the \"evolution\" of such a random graph if N is increased . In this investigation we endeavour to find what is the \"typical\" structure at a given stage of evolution (i . e . if N is equal, or asymptotically equal, to a given function N(n) of n) . By a \"typical\" structure we mean such a structure the probability of which tends to 1 if n -* + when N = N(n) . If A is such a property that lim Pn,N,(n ) ( A) = 1, we shall say that „almost all\" graphs Gn,N(n) n--possess this property .",
"title": ""
},
{
"docid": "55160cc3013b03704555863c710e6d21",
"text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than M L . CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than M L . In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.",
"title": ""
},
{
"docid": "363e799cd63907ce64ad405cfdff3b56",
"text": "This paper discusses visual methods that can be used to understand and interpret the results of classification using support vector machines (SVM) on data with continuous real-valued variables. SVM induction algorithms build pattern classifiers by identifying a maximal margin separating hyperplane from training examples in high dimensional pattern spaces or spaces induced by suitable nonlinear kernel transformations over pattern spaces. SVM have been demonstrated to be quite effective in a number of practical pattern classification tasks. Since the separating hyperplane is defined in terms of more than two variables it is necessary to use visual techniques that can navigate the viewer through high-dimensional spaces. We demonstrate the use of projection-based tour methods to gain useful insights into SVM classifiers with linear kernels on 8-dimensional data.",
"title": ""
},
{
"docid": "580e0cc120ea9fd7aa9bb0a8e2a73cb3",
"text": "In the emerging field of micro-blogging and social communication services, users post millions of short messages every day. Keeping track of all the messages posted by your friends and the conversation as a whole can become tedious or even impossible. In this paper, we presented a study on automatically clustering and classifying Twitter messages, also known as “tweets”, into different categories, inspired by the approaches taken by news aggregating services like Google News. Our results suggest that the clusters produced by traditional unsupervised methods can often be incoherent from a topical perspective, but utilizing a supervised methodology that utilize the hash-tags as indicators of topics produce surprisingly good results. We also offer a discussion on temporal effects of our methodology and training set size considerations. Lastly, we describe a simple method of finding the most representative tweet in a cluster, and provide an analysis of the results.",
"title": ""
},
{
"docid": "60ac1fa826816d39562104849fff8f46",
"text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.",
"title": ""
},
{
"docid": "0284d90b11d02f727957e231d3d1a781",
"text": "9 Although adherence to project schedules and budgets is most highly valued by project owners, 10 more than 53% of typical construction projects are behind schedule and more than 66% suffer 11 from cost overruns, partly due to inability to accurately capture construction progress. To address 12 these challenges, this paper presents new geometryand appearance-based reasoning methods for 13 detecting construction progress, which has the potential to provide more frequent progress mea14 sures using visual data that are already being collected by general contractors. The initial step 15 of geometry-based filtering detects the state of construction of Building Information Modeling 16 (BIM) elements (e.g. in-progress, completed). The next step of appearance-based reasoning cap17 tures operation-level activities by recognizing different material types. Two methods have been 18 investigated for the latter step: a texture-based reasoning for image-based 3D point clouds and 19 color-based reasoning for laser scanned point clouds. This paper presents two case studies for 20 each reasoning approach for validating the proposed methods. The results demonstrate the effec21 tiveness and practical significances of the proposed methods. 22",
"title": ""
},
{
"docid": "040e5e800895e4c6f10434af973bec0f",
"text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "d0496fb5740bfc9da308087425c033e2",
"text": "Abstract: The measurement accuracy for heart rate or SpO2 using photoplethysmography (PPG) is influenced by how well the noise from motion artifacts and other sources can be removed. Eliminating the motion artifacts is particularly difficult since its frequency band overlaps that of the basic PPG signal. Therefore, we propose the Periodic Moving Average Filter (PMAF) to remove motion artifacts. The PMAF is based on the quasi-periodicity of the PPG signals. After segmenting the PPG signal on periodic boundaries, we average the m samples of each period. As a result, we remove the motion artifacts well without the deterioration of the characteristic point.",
"title": ""
},
{
"docid": "57e95050bcaf50fdb6c7a5390382a1b7",
"text": "We compare our own embodied conversational agent (ECA) scheme, BotCom, with seven other complex Internet-based ECAs according to recentlypublished information about them, and highlight some important attributes that have received little attention in the construction of realistic ECAs. BotCom incorporates the use of emotions, humor and complex information services. We cover issues that are likely to be of greatest interest for developers of ECAs that, like BotCom, are directed towards intensive commercial use. 1 Using ECAs on the Internet Many embodied conversational agents (ECAs) are targeting the Internet. However, systems that are bound to this global network not only benefit from several advantages of the huge amount of accessible information provided by this medium, but inherit its common problems as well. Among those are the difficulty of relevant search, complexity of available information, unstructuredness, bandwidth limitations etc. So, what are the main arguments in favor of deploying an ECA on the Internet? First of all, the preference for real-time events, real-time information flow, expresses an innate need of mankind. Internet ECAs have this advantage as opposed to any other on-line customer-company communication method, such as web pages, email, guest books, etc. In addition, secondary orality, the communication by dialogues as opposed to monologues, is also far more effective when dealing with humans [5]. Furthemore, even though ECAs and simpler chatterbots may give wrong answers to certain questions, they create some sort of representation of themselves in the customers mind [13]. An ordinary website can be considered not only less interactive than one with an ECA, but the way it operates is closer to monologues than to dialogues. We have developed BotCom, a fully working prototype system, as part of a research project. It is capable of chatting with users about different topics as well as displaying synchronized affective feedback based on a complex emotional state generator, GALA. Moreover, it has a feature of connecting to various information T. Rist et al. (Eds.): IVA 2003, LNAI 2792, pp. 5-12, 2003. Springer-Verlag Berlin Heidelberg 2003 6 Gábor Tatai et al. sources and search engines thus enabling an easily scalable knowledge base. Its primary use will be interactive website navigation, entertainment, marketing and education. BotCom is currently being introduced into commercial use. There is no space to discuss all the features and interesting implementation experiences with our BotCom ECA in this paper. Therefore we focus on some highlights where, we think, our ECA is special or when a theoretical or practical observation has proved to be particularly useful, so that others might benefit from these as well. 2 Comparison of Popular Internet Chatterbots During design and implementation we have analyzed, evaluated and constantly monitored existing ECAs in order to reinforce and validate our development approach. We did not follow only one methodology; several of them ([2], [4], [12], [15]) served as a basis of our own compound method, as, in spite of the similarities, overlaps frequently occurred and all of them contained unique evaluation variables. We studied the following (either commercial or award-wining) chatbots (see Table 1. for the results): Ultra Hal Assistant 4.5 (Zabaware, Inc., http://www.zabaware.com/assistant) Ramona (KurzweiAI.net, http://www.kurzweilai.net/) Elbot (Kiwilogic, http://www.elbot.com/) Ella (Kevin L. 
Copple, http://www.ellaz.com/EllaASPLoebner/Loebner2002.aspx) Nicole (NativeMinds, http://an1-sj.nativeminds.com/demos_default.html) Lucy (Artificial Life, http://www.artificial-life.com/v5/website.php) Julia (Conversive, http://www.vperson.com) 2.1 Visual Appearance In most cases visualization is typically solved by 2D graphics focusing only on the face, or photo-realistic schemes of still pictures (photos). Some tend to limit animation to only certain parts of the body (e.g. eyes, lips, eye-brows, chin), the roles of which are considered to be important in communication [11]. 3D animations are also applied occasionally, for instance in Lucys case. Despite the more lifelike and realistic appearance of 3D real-time rendered graphics, there is no underpinning evidence of differences in expressiveness amongst cartoons, photos, movies etc., though various studies confirm that users assume high-quality animated ECAs to be more intelligent [15]. Aiko, a female instance of BotCom, runs on the users web interface. The representation of her reactions and emotions is implemented through a 3D pre-processed (pre-rendered), realistic animation. Since the face and gestures provide the significant secondary communication channels [8], only the head, the torso (shoulders, arms) and occasionally the hands were visualized. To be able to diversify and refine the reactions, the collection of animations is extendable, but the right balance should be kept",
"title": ""
},
{
"docid": "2ff08c8505e7d68304b63c6942feb837",
"text": "This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically the Latent Dirichlet Allocation (LDA), to generate topics which are then later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and then runs our novel Bump-Detection algorithm. Similar topics labeled as an Eventy-Topic are then grouped together. The algorithm is demonstrated on two Terabyte sized corpuses a Reuters News corpus and a Twitter corpus. Our method is evaluated on a human annotated test set. Our algorithm demonstrates its ability to accurately describe and label events in a temporal text corpus.",
"title": ""
},
{
"docid": "eeff1f2e12e5fc5403be8c2d7ca4d10c",
"text": "Optical Character Recognition (OCR) systems have been effectively developed for the recognition of printed script. The accuracy of OCR system mainly depends on the text preprocessing and segmentation algorithm being used. When the document is scanned it can be placed in any arbitrary angle which would appear on the computer monitor at the same angle. This paper addresses the algorithm for correction of skew angle generated in scanning of the text document and a novel profile based method for segmentation of printed text which separates the text in document image into lines, words and characters. Keywords—Skew correction, Segmentation, Text preprocessing, Horizontal Profile, Vertical Profile.",
"title": ""
},
{
"docid": "2e6623aa13ca5a047d888612c9a8e22a",
"text": "We present a hydro-elastic actuator that has a linear spring intentionally placed in series between the hydraulic piston and actuator output. The spring strain is measured to get an accurate estimate of force. This measurement alone is used in PI feedback to control the force in the actuator. The spring allows for high force fidelity, good force control, minimum impedance, and large dynamic range. A third order linear actuator model is broken into two fundamental cases: fixed load – high force (forward transfer function), and free load – zero force (impedance). These two equations completely describe the linear characteristics of the actuator. This model is presented with dimensional analysis to allow for generalization. A prototype actuator that demonstrates force control and low impedance is also presented. Dynamic analysis of the prototype actuator correlates well with the linear mathematical model. This work done with hydraulics is an extension from previous work done with electro-mechanical actuators. Keywords— Series Elastic Actuator, Force Control, Hydraulic Force Control, Biomimetic Robots",
"title": ""
},
{
"docid": "a0ffe6a1e991a7e34b3256560f11889f",
"text": "This paper presents a GPU-based stereo matching system with good performance in both accuracy and speed. The matching cost volume is initialized with an AD-Census measure, aggregated in dynamic cross-based regions, and updated in a scanline optimization framework to produce the disparity results. Various errors in the disparity results are effectively handled in a multi-step refinement process. Each stage of the system is designed with parallelism considerations such that the computations can be accelerated with CUDA implementations. Experimental results demonstrate the accuracy and the efficiency of the system: currently it is the top performer in the Middlebury benchmark, and the results are achieved on GPU within 0.1 seconds. We also provide extra examples on stereo video sequences and discuss the limitations of the system.",
"title": ""
},
{
"docid": "75177326b8408f755100bf86e1f8bd90",
"text": "We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodeable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful algorithm to generate good short-block-length LDPC codes.",
"title": ""
},
{
"docid": "bdda2d3eef1a5040d626419c10f18d36",
"text": "This paper presents a novel hybrid permanent magnet and wound field synchronous machine geometry with a displaced reluctance axis. This concept is known for improving motor operation performance and efficiency at the cost of an inferior generator operation. To overcome this disadvantage, the proposed machine geometry is capable of inverting the magnetic asymmetry dynamically. Thereby, the positive effects of the magnetic asymmetry can be used in any operation point. This paper examines the theoretical background and shows the benefits of this geometry by means of simulation and measurement. The prototype achieves an increase in torque of 4 % and an increase in efficiency of 2 percentage points over a conventional electrically excited synchronous machine.",
"title": ""
},
{
"docid": "30646575fc88d8bbd1a70b1fca5e4afc",
"text": "During long standing hyperglycaemic state in diabetes mellitus, glucose forms covalent adducts with the plasma proteins through a non-enzymatic process known as glycation. Protein glycation and formation of advanced glycation end products (AGEs) play an important role in the pathogenesis of diabetic complications like retinopathy, nephropathy, neuropathy, cardiomyopathy along with some other diseases such as rheumatoid arthritis, osteoporosis and aging. Glycation of proteins interferes with their normal functions by disrupting molecular conformation, altering enzymatic activity, and interfering with receptor functioning. AGEs form intra- and extracellular cross linking not only with proteins, but with some other endogenous key molecules including lipids and nucleic acids to contribute in the development of diabetic complications. Recent studies suggest that AGEs interact with plasma membrane localized receptors for AGEs (RAGE) to alter intracellular signaling, gene expression, release of pro-inflammatory molecules and free radicals. The present review discusses the glycation of plasma proteins such as albumin, fibrinogen, globulins and collagen to form different types of AGEs. Furthermore, the role of AGEs in the pathogenesis of diabetic complications including retinopathy, cataract, neuropathy, nephropathy and cardiomyopathy is also discussed.",
"title": ""
},
{
"docid": "4f9fbf76cc8dcc57672f91b853af7f7f",
"text": "An in vivo biosensor is a technology in development that will assess the biological activity of cancers to individualise external beam radiotherapy. Inserting such technology into the human body creates cybernetic organisms; a cyborg that is a human-machine hybrid. There is a gap in knowledge relating to patient willingness to allow automated technology to be embedded and to become cyborg. There is little agreement around what makes a cyborg and less understanding of the variation in the cyborgisation process. Understanding the viewpoint of possible beneficiaries addresses such gaps. There are currently three versions of 'cyborg' in the literature (i) a critical feminist STS concept to destabilise power inherent in dualisms, (ii) an extreme version of the human/machine in science-fiction that emphasises the 'man' in human and (iii) a prediction of internal physiological adaptation required for future space exploration. Interview study findings with 12 men in remission from prostate cancer show a fourth version can be used to describe current and future sub-groups of the population; 'everyday cyborgs'. For the everyday cyborg the masculine cyborg status found in the fictionalised human-machine related to issues of control of the cancer. This was preferred to the felt stigmatisation of being a 'leaker and bleeder'. The willingness to become cyborg was matched with a having to get used to the everyday cyborg's technological adaptations and risks. It is crucial to explore the everyday cyborg's sometimes ambivalent viewpoint. The everyday cyborg thus adds the dimension of participant voice currently missing in existing cyborg literatures and imaginations.",
"title": ""
},
{
"docid": "ec5bdd52fa05364923cb12b3ff25a49f",
"text": "A system to prevent subscription fraud in fixed telecommunications with high impact on long-distance carriers is proposed. The system consists of a classification module and a prediction module. The classification module classifies subscribers according to their previous historical behavior into four different categories: subscription fraudulent, otherwise fraudulent, insolvent and normal. The prediction module allows us to identify potential fraudulent customers at the time of subscription. The classification module was implemented using fuzzy rules. It was applied to a database containing information of over 10,000 real subscribers of a major telecom company in Chile. In this database, a subscription fraud prevalence of 2.2% was found. The prediction module was implemented as a multilayer perceptron neural network. It was able to identify 56.2% of the true fraudsters, screening only 3.5% of all the subscribers in the test set. This study shows the feasibility of significantly preventing subscription fraud in telecommunications by analyzing the application information and the customer antecedents at the time of application. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3567ec67dc263a6585e8d3af62b1d9f1",
"text": "SemStim is a graph-based recommendation algorithm which is based on Spreading Activation and adds targeted activation and duration constraints. SemStim is not affected by data sparsity, the cold-start problem or data quality issues beyond the linking of items to DBpedia. The overall results show that the performance of SemStim for the diversity task of the challenge is comparable to the other participants, as it took 3rd place out of 12 participants with 0.0413 F1@20 and 0.476 ILD@20. In addition, as SemStim has been designed for the requirements of cross-domain recommendations with different target and source domains, this shows that SemStim can also provide competitive single-domain recommendations.",
"title": ""
}
] |
scidocsrr
|
99d250be797b0269e047d9375aae37b6
|
Unleashing Use-Before-Initialization Vulnerabilities in the Linux Kernel Using Targeted Stack Spraying
|
[
{
"docid": "ffa25551d331651d80f8d91f59a441c0",
"text": "Since vulnerabilities in Linux kernel are on the increase, attackers have turned their interests into related exploitation techniques. However, compared with numerous researches on exploiting use-after-free vulnerabilities in the user applications, few efforts studied how to exploit use-after-free vulnerabilities in Linux kernel due to the difficulties that mainly come from the uncertainty of the kernel memory layout. Without specific information leakage, attackers could only conduct a blind memory overwriting strategy trying to corrupt the critical part of the kernel, for which the success rate is negligible.\n In this work, we present a novel memory collision strategy to exploit the use-after-free vulnerabilities in Linux kernel reliably. The insight of our exploit strategy is that a probabilistic memory collision can be constructed according to the widely deployed kernel memory reuse mechanisms, which significantly increases the success rate of the attack. Based on this insight, we present two practical memory collision attacks: An object-based attack that leverages the memory recycling mechanism of the kernel allocator to achieve freed vulnerable object covering, and a physmap-based attack that takes advantage of the overlap between the physmap and the SLAB caches to achieve a more flexible memory manipulation. Our proposed attacks are universal for various Linux kernels of different architectures and could successfully exploit systems with use-after-free vulnerabilities in kernel. Particularly, we achieve privilege escalation on various popular Android devices (kernel version>=4.3) including those with 64-bit processors by exploiting the CVE-2015-3636 use-after-free vulnerability in Linux kernel. To our knowledge, this is the first generic kernel exploit for the latest version of Android. Finally, to defend this kind of memory collision, we propose two corresponding mitigation schemes.",
"title": ""
},
{
"docid": "3cc4ba42c0174aa68dae5dc2ef928970",
"text": "Software security practitioners are often torn between choosing performance or security. In particular, OS kernels are sensitive to the smallest performance regressions. This makes it difficult to develop innovative kernel hardening mechanisms: they may inevitably incur some run-time performance overhead. Here, we propose building each kernel function with and without hardening, within a single split kernel. In particular, this allows trusted processes to be run under unmodified kernel code, while system calls of untrusted processes are directed to the hardened kernel code. We show such trusted processes run with no overhead when compared to an unmodified kernel. This allows deferring the decision of making use of hardening to the run-time. This means kernel distributors, system administrators and users can selectively enable hardening according to their needs: we give examples of such cases. Although this approach cannot be directly applied to arbitrary kernel hardening mechanisms, we show cases where it can. Finally, our implementation in the Linux kernel requires few changes to the kernel sources and no application source changes. Thus, it is both maintainable and easy to use.",
"title": ""
},
{
"docid": "818ecd4a961de99bd90e53a41dabff7d",
"text": "Lack of memory safety in C is the root cause of a multitude of serious bugs and security vulnerabilities. Numerous software-only and hardware-based schemes have been proposed to enforce memory safety. Among these approaches, pointer-based checking, which maintains per-pointer metadata in a disjoint metadata space, has been recognized as providing comprehensive memory safety. Software approaches for pointer-based checking have high performance overheads. In contrast, hardware approaches introduce a myriad of hardware structures and widgets to mitigate those performance overheads.\n This paper proposes WatchdogLite, an ISA extension that provides hardware acceleration for a compiler implementation of pointer-based checking. This division of labor between the compiler and the hardware allows for hardware acceleration while using only preexisting architectural registers. By leveraging the compiler to identify pointers, perform check elimination, and insert the new instructions, this approach attains performance similar to prior hardware-intensive approaches without adding any hardware structures for tracking metadata.",
"title": ""
}
] |
[
{
"docid": "01e6823392427274c4bd50cc1bf6bf6c",
"text": "The neocortex has a high capacity for plasticity. To understand the full scope of this capacity, it is essential to know how neurons choose particular partners to form synaptic connections. By using multineuron whole-cell recordings and confocal microscopy we found that axons of layer V neocortical pyramidal neurons do not preferentially project toward the dendrites of particular neighboring pyramidal neurons; instead, axons promiscuously touch all neighboring dendrites without any bias. Functional synaptic coupling of a small fraction of these neurons is, however, correlated with the existence of synaptic boutons at existing touch sites. These data provide the first direct experimental evidence for a tabula rasa-like structural matrix between neocortical pyramidal neurons and suggests that pre- and postsynaptic interactions shape the conversion between touches and synapses to form specific functional microcircuits. These data also indicate that the local neocortical microcircuit has the potential to be differently rewired without the need for remodeling axonal or dendritic arbors.",
"title": ""
},
{
"docid": "8a607387d2803985d28d386258ba7fae",
"text": "based on cross-cultural research. This approach expands earlier theoretical interpretations offered for the significance of cave art that fail to account for central aspects of cave art material. Clottes & Lewis-Williams (1998), Smith (1992) and Ryan (1999) concur in the interpretation that neurologically-based shamanic practices were central to cave art (cf. Lewis-Williams 1997a,b). Clottes & Lewis-Williams suggest that, in spite of the temporal distance, we have better access to Upper Palaeolithic peoples’ religious experiences than other aspects of their lives because of the neuropsychological basis of those experiences. The commonality in the experiences of shamanism across space and time provides a basis for forming ‘some idea of the social and mental context out of which Upper Palaeolithic religion and art came’ (Clottes & LewisMichael Winkelman",
"title": ""
},
{
"docid": "d0e6be45234a23ed1440f1a0e7bb1460",
"text": "Sparse Linear Methods (SLIM) are state-of-the-art recommendation approaches based on matrix factorization, which rely on a regularized !1-norm and !2-norm optimization –an alternative optimization problem to the traditional Frobenious norm. Although they have shown outstanding performance in Top-N recommendation, existent works have not yet analyzed some inherent assumptions that can have an important effect on the performance of these algorithms. In this paper, we attempt to improve the performance of SLIM by proposing a generalized formulation of the aforementioned assumptions. Instead of directly learning a sparse representation of the user-item matrix, we (i) learn the latent factors’ matrix of the users and the items via a traditional matrix factorization approach, and then (ii) reconstruct the latent user or item matrix via prototypes which are learned using sparse coding, an alternative SLIM commonly used in the image processing domain. The results show that by tuning the parameters of our generalized model we are able to outperform SLIM in several Top-N recommendation experiments conducted on two different datasets, using both nDCG and nDCG@10 as evaluation metrics. These preliminary results, although not conclusive, indicate a promising line of research to improve the performance of SLIM recommendation.",
"title": ""
},
{
"docid": "211b858db72c962efaedf66f2ed9479d",
"text": "Along with the rapid development of information and communication technologies, educators are trying to keep up with the dramatic changes in our electronic environment. These days mobile technology, with popular devices such as iPhones, Android phones, and iPads, is steering our learning environment towards increasingly focusing on mobile learning or m-Learning. Currently, most interfaces employ keyboards, mouse or touch technology, but some emerging input-interfaces use voiceor marker-based gesture recognition. In the future, one of the cutting-edge technologies likely to be used is robotics. Robots are already being used in some classrooms and are receiving an increasing level of attention. Robots today are developed for special purposes, quite similar to personal computers in their early days. However, in the future, when mass production lowers prices, robots will bring about big changes in our society. In this column, the author focuses on educational service robots. Educational service robots for language learning and robot-assisted language learning (RALL) will be introduced, and the hardware and software platforms for RALL will be explored, as well as implications for future research.",
"title": ""
},
{
"docid": "ac2a980bb528c6747062195017f155c0",
"text": "Dimension reduction is commonly defined as the process of mapping high-dimensional data to a lower-dimensional embedding. Applications of dimension reduction include, but are not limited to, filtering, compression, regression, classification, feature analysis, and visualization. We review methods that compute a point-based visual representation of high-dimensional data sets to aid in exploratory data analysis. The aim is not to be exhaustive but to provide an overview of basic approaches, as well as to review select state-of-the-art methods. Our survey paper is an introduction to dimension reduction from a visualization point of view. Subsequently, a comparison of state-of-the-art methods outlines relations and shared research foci. 1998 ACM Subject Classification G.3 Multivariate Statistics; I.2.6 Learning; G.1.2 Approximation",
"title": ""
},
{
"docid": "dec78cff9fa87a3b51fc32681ba39a08",
"text": "Alkaline saponification is often used to remove interfering chlorophylls and lipids during carotenoids analysis. However, saponification also hydrolyses esterified carotenoids and is known to induce artifacts. To avoid carotenoid artifact formation during saponification, Larsen and Christensen (2005) developed a gentler and simpler analytical clean-up procedure involving the use of a strong basic resin (Ambersep 900 OH). They hypothesised a saponification mechanism based on their Liquid Chromatography-Photodiode Array (LC-PDA) data. In the present study, we show with LC-PDA-accurate mass-Mass Spectrometry that the main chlorophyll removal mechanism is not based on saponification, apolar adsorption or anion exchange, but most probably an adsorption mechanism caused by H-bonds and dipole-dipole interactions. We showed experimentally that esterified carotenoids and glycerolipids were not removed, indicating a much more selective mechanism than initially hypothesised. This opens new research opportunities towards a much wider scope of applications (e.g. the refinement of oils rich in phytochemical content).",
"title": ""
},
{
"docid": "abda1b483d6f874fecba6001fef4ada1",
"text": "To what extent do online discussion spaces expose participants to political talk and to cross-cutting political views in particular? Drawing on a representative national sample of over 1000 Americans reporting participation in chat rooms or message boards, we examine the types of online discussion spaces that create opportunities for cross-cutting political exchanges. Our findings suggest that the potential for deliberation occurs primarily in online groups where politics comes up only incidentally, but is not the central purpose of the discussion space. We discuss the implications of our findings for the contributions of the Internet to cross-cutting political discourse.",
"title": ""
},
{
"docid": "300485eefc3020135cdaa31ad36f7462",
"text": "The number of cyber threats is constantly increasing. In 2013, 200,000 malicious tools were identified each day by antivirus vendors. This figure rose to 800,000 per day in 2014 and then to 1.8 million per day in 2016! The bar of 3 million per day will be crossed in 2017. Traditional security tools (mainly signature-based) show their limits and are less and less effective to detect these new cyber threats. Detecting never-seen-before or zero-day malware, including ransomware, efficiently requires a new approach in cyber security management. This requires a move from signature-based detection to behavior-based detection. We have developed a data breach detection system named CDS using Machine Learning techniques which is able to identify zero-day malware by analyzing the network traffic. In this paper, we present the capability of the CDS to detect zero-day ransomware, particularly WannaCry.",
"title": ""
},
{
"docid": "53f9f38400266da916dd10200b6b4df1",
"text": "Time series prediction has been studied in a variety of domains. However, it is still challenging to predict future series given historical observations and past exogenous data. Existing methods either fail to consider the interactions among different components of exogenous variables which may affect the prediction accuracy, or cannot model the correlations between exogenous data and target data. Besides, the inherent temporal dynamics of exogenous data are also related to the target series prediction, and thus should be considered as well. To address these issues, we propose an end-to-end deep learning model, i.e., Hierarchical attention-based Recurrent Highway Network (HRHN), which incorporates spatio-temporal feature extraction of exogenous variables and temporal dynamics modeling of target variables into a single framework. Moreover, by introducing the hierarchical attention mechanism, HRHN can adaptively select the relevant exogenous features in different semantic levels. We carry out comprehensive empirical evaluations with various methods over several datasets, and show that HRHN outperforms the state of the arts in time series prediction, especially in capturing sudden changes and sudden oscillations of time series.",
"title": ""
},
{
"docid": "3117a335e4324b151f25d0d3b4279b3c",
"text": "Finding more effective solution and tools for complicated managerial problems is one of the most important and dominant subjects in management studies. With the advancement of computer and communication technology, the tools that are using for management decisions have undergone a massive change. Artificial Neural Networks (ANNs) are one of these tools that have become a critical component of business intelligence. In this article we describe the basic of neural networks as well as a review of selected works done in application of ANNs in management sciences.",
"title": ""
},
{
"docid": "366f31829bb1ac55d195acef880c488e",
"text": "Intense competition among a vast number of group-buying websites leads to higher product homogeneity, which allows customers to switch to alternative websites easily and reduce their website stickiness and loyalty. This study explores the antecedents of user stickiness and loyalty and their effects on consumers’ group-buying repurchase intention. Results indicate that systems quality, information quality, service quality, and alternative system quality each has a positive relationship with user loyalty through user stickiness. Meanwhile, information quality directly impacts user loyalty. Thereafter, user stickiness and loyalty each has a positive relationship with consumers’ repurchase intention. Theoretical and managerial implications are also discussed.",
"title": ""
},
{
"docid": "bcbcb23a0681ef063a37b94ccc26b00c",
"text": "Race and racism persist online in ways that are both new and unique to the Internet, alongside vestiges of centuries-old forms that reverberate significantly both offline and on. As we mark 15 years into the field of Internet studies, it becomes necessary to assess what the extant research tells us about race and racism. This paper provides an analysis of the literature on race and racism in Internet studies in the broad areas of (1) race and the structure of the Internet, (2) race and racism matters in what we do online, and (3) race, social control and Internet law. Then, drawing on a range of theoretical perspectives, including Hall’s spectacle of the Other and DuBois’s view of white culture, the paper offers an analysis and critique of the field, in particular the use of racial formation theory. Finally, the paper points to the need for a critical understanding of whiteness in Internet studies.",
"title": ""
},
{
"docid": "b3ddcc6dbe3e118dfd0630feb42713c9",
"text": "This thesis details the use of a programmable logic device to increase the playing strength of a chess program. The time–consuming task of generating chess moves is relegated to hardware in order to increase the processing speed of the search algorithm. A simpler inter–square connection protocol reduces the number of wires between chess squares, when compared to the Deep Blue design. With this interconnection scheme, special chess moves are easily resolved. Furthermore, dynamically programmable arbiters are introduced for optimal move ordering. Arbiter centrality is also shown to improve move ordering, thereby creating smaller search trees. The move generator is designed to allow the integration of crucial move ordering heuristics. With its new hardware move generator, the chess program’s playing ability is noticeably improved.",
"title": ""
},
{
"docid": "d71c8d9f5fed873937d6a645f17c9b47",
"text": "Yang, C.-C., Prasher, S.O., Landry, J.-A., Perret, J. and Ramaswamy, H.S. 2000. Recognition of weeds with image processing and their use with fuzzy logic for precision farming. Can. Agric. Eng. 42:195200. Herbicide use can be reduced if the spatial distribution of weeds in the field is taken into account. This paper reports the initial stages of development of an image capture/processing system to detect weeds, as well as a fuzzy logic decision-making system to determine where and how much herbicide to apply in an agricultural field. The system used a commercially available digital camera and a personal computer. In the image processing stage, green objects in each image were identified using a greenness method that compared the red, green, and blue (RGB) intensities. The RGB matrix was reduced to a binary form by applying the following criterion: if the green intensity of a pixel was greater than the red and the blue intensities, then the pixel was assigned a value of one; otherwise the pixel was given a value of zero. The resulting binary matrix was used to compute greenness area for weed coverage, and greenness distribution of weeds (weed patch). The values of weed coverage and weed patch were inputs to the fuzzy logic decision-making system, which used the membership functions to control the herbicide application rate at each location. Simulations showed that a graduated fuzzy strategy could potentially reduce herbicide application by 5 to 24%, and that an on/off strategy resulted in an even greater reduction of 15 to 64%.",
"title": ""
},
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "70f103f9d30ab31527d3a422bf4ca490",
"text": "Early study tries to use chatbot for counseling services. They changed drinking habit of who being consulted by leading them via intervene chatbot. However, the application did not concerned about psychiatric status through continuous conversation with user monitoring. Furthermore, they had no ethical judgment method that about the intervention of the chatbot. We argue that more reasonable and continuous emotion recognition will make better mental healthcare experiment. It will be more proper clinical psychiatric consolation in ethical view as well. This paper suggests a introduce a novel chatbot system for psychiatric counseling service. Our system understands content of conversation based on recent natural language processing (NLP) methods with emotion recognition. It senses emotional flow through the continuous observation of conversation. Also, we generate personalized counseling response from user input, to do this, we use additional constrains to generation model for the proper response generation which can detect conversational context, user emotion and expected reaction.",
"title": ""
},
{
"docid": "8de25881e8a5f12f891656f271c44d4d",
"text": "Forest fires play a critical role in landscape transformation, vegetation succession, soil degradation and air quality. Improvements in fire risk estimation are vital to reduce the negative impacts of fire, either by lessen burn severity or intensity through fuel management, or by aiding the natural vegetation recovery using post-fire treatments. This paper presents the methods to generate the input variables and the risk integration developed within the Firemap project (funded under the Spanish Ministry of Science and Technology) to map wildland fire risk for several regions of Spain. After defining the conceptual scheme for fire risk assessment, the paper describes the methods used to generate the risk parameters, and presents",
"title": ""
},
{
"docid": "87c973e92ef3affcff4dac0d0183067c",
"text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.",
"title": ""
},
{
"docid": "2ecb4d841ef57a3acdf05cbb727aecbf",
"text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.",
"title": ""
}
] |
scidocsrr
|
9eb19a8c3d5db33324b7c2bacf136455
|
End-to-End Instance Segmentation with Recurrent Attention
|
[
{
"docid": "7a9b9633243d84978d9e975744642e18",
"text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].",
"title": ""
},
{
"docid": "df9acaed8dbcfbd38a30e4e1fa77aa8a",
"text": "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-theart object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
"title": ""
},
{
"docid": "f5e59d92c2a3d810f1b0b9b92efcdd1e",
"text": "In this work, we propose a novel Reversible Recursive Instance-level Object Segmentation (R2-IOS) framework to address the challenging instance-level object segmentation task. R2-IOS consists of a reversible proposal refinement sub-network that predicts bounding box offsets for refining the object proposal locations, and an instance-level segmentation sub-network that generates the foreground mask of the dominant object instance in each proposal. By being recursive, R2-IOS iteratively optimizes the two subnetworks during joint training, in which the refined object proposals and improved segmentation predictions are alternately fed into each other to progressively increase the network capabilities. By being reversible, the proposal refinement sub-network adaptively determines an optimal number of refinement iterations required for each proposal during both training and testing. Furthermore, to handle multiple overlapped instances within a proposal, an instance-aware denoising autoencoder is introduced into the segmentation sub-network to distinguish the dominant object from other distracting instances. Extensive experiments on the challenging PASCAL VOC 2012 benchmark well demonstrate the superiority of R2-IOS over other state-of-the-art methods. In particular, the APr over 20 classes at 0:5 IoU achieves 66:7%, which significantly outperforms the results of 58:7% by PFN [17] and 46:3% by [22].",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
}
] |
[
{
"docid": "7489989ecaa16bc699949608f9ffc8a1",
"text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2ab2280b7821ae6ad27fff995fd36fe0",
"text": "Recent years have seen the development of a satellite communication system called a high-throughput satellite (HTS), which enables large-capacity communication to cope with various communication demands. Current HTSs have a fixed allocation of communication resources and cannot flexibly change this allocation during operation. Thus, effectively allocating communication resources for communication demands with a bias is not possible. Therefore, technology is being developed to add flexibility to satellite communication systems, but there is no system analysis model available to quantitatively evaluate the flexibility performance. In this study, we constructed a system analysis model to quantitatively evaluate the flexibility of a satellite communication system and used it to analyze a satellite communication system equipped with a digital channelizer.",
"title": ""
},
{
"docid": "c8948a93e138ca0ac8cae3247dc9c81a",
"text": "Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "eb344bf180467ccbd27d0aff2c57be73",
"text": "Most IP-geolocation mapping schemes [14], [16], [17], [18] take delay-measurement approach, based on the assumption of a strong correlation between networking delay and geographical distance between the targeted client and the landmarks. In this paper, however, we investigate a large region of moderately connected Internet and find the delay-distance correlation is weak. But we discover a more probable rule - with high probability the shortest delay comes from the closest distance. Based on this closest-shortest rule, we develop a simple and novel IP-geolocation mapping scheme for moderately connected Internet regions, called GeoGet. In GeoGet, we take a large number of webservers as passive landmarks and map a targeted client to the geolocation of the landmark that has the shortest delay. We further use JavaScript at targeted clients to generate HTTP/Get probing for delay measurement. To control the measurement cost, we adopt a multistep probing method to refine the geolocation of a targeted client, finally to city level. The evaluation results show that when probing about 100 landmarks, GeoGet correctly maps 35.4 percent clients to city level, which outperforms current schemes such as GeoLim [16] and GeoPing [14] by 270 and 239 percent, respectively, and the median error distance in GeoGet is around 120 km, outperforming GeoLim and GeoPing by 37 and 70 percent, respectively.",
"title": ""
},
{
"docid": "1a17bd33f1bd57966cca13a82861a335",
"text": "The process of photosynthesis is initiated by the capture of sunlight by a network of light-absorbing molecules (chromophores), which are also responsible for the subsequent funneling of the excitation energy to the reaction centers. Through evolution, genetic drift, and speciation, photosynthetic organisms have discovered many solutions for light harvesting. In this review, we describe the underlying photophysical principles by which this energy is absorbed, as well as the mechanisms of electronic excitation energy transfer (EET). First, optical properties of the individual pigment chromophores present in light-harvesting antenna complexes are introduced, and then we examine the collective behavior of pigment-pigment and pigment-protein interactions. The description of energy transfer, in particular multichromophoric antenna structures, is shown to vary depending on the spatial and energetic landscape, which dictates the relative coupling strength between constituent pigment molecules. In the latter half of the article, we focus on the light-harvesting complexes of purple bacteria as a model to illustrate the present understanding of the synergetic effects leading to EET optimization of light-harvesting antenna systems while exploring the structure and function of the integral chromophores. We end this review with a brief overview of the energy-transfer dynamics and pathways in the light-harvesting antennas of various photosynthetic organisms.",
"title": ""
},
{
"docid": "85aa1bc572171c85b1c01898960e2779",
"text": "The classification of breast masses from mammograms into benign or malignant has been commonly addressed with machine learning classifiers that use as input a large set of hand-crafted features, usually based on general geometrical and texture information. In this paper, we propose a novel deep learning method that automatically learns features based directly on the optmisation of breast mass classification from mammograms, where we target an improved classification performance compared to the approach described above. The novelty of our approach lies in the two-step training process that involves a pre-training based on the learning of a regressor that estimates the values of a large set of handcrafted features, followed by a fine-tuning stage that learns the breast mass classifier. Using the publicly available INbreast dataset, we show that the proposed method produces better classification results, compared with the machine learning model using hand-crafted features and with deep learning method trained directly for the classification stage without the pre-training stage. We also show that the proposed method produces the current state-of-the-art breast mass classification results for the INbreast dataset. Finally, we integrate the proposed classifier into a fully automated breast mass detection and segmentation, which shows promising results.",
"title": ""
},
{
"docid": "1566c80c4624533292c7442c61f3be15",
"text": "Modern software often relies on the combination of several software modules that are developed independently. There are use cases where different software libraries from different programming languages are used, e.g., embedding DLL files in JAVA applications. Even more complex is the case when different programming paradigms are combined like within applications with database connections, for instance PHP and SQL. Such a diversification of programming languages and modules in just one software application is becoming more and more important, as this leads to a combination of the strengths of different programming paradigms. But not always, the developers are experts in the different programming languages or even in different programming paradigms. So, it is desirable to provide easy to use interfaces that enable the integration of programs from different programming languages and offer access to different programming paradigms. In this paper we introduce a connector architecture for two programming languages of different paradigms: JAVA as a representative of object oriented programming languages and PROLOG for logic programming. Our approach provides a fast, portable and easy to use communication layer between JAVA and PROLOG. The exchange of information is done via a textual term representation which can be used independently from a deployed PROLOG engine. The proposed connector architecture allows for Object Unification on the JAVA side. We provide an exemplary connector for JAVA and SWI-PROLOG, a well-known PROLOG implementation.",
"title": ""
},
{
"docid": "fbb5a86992438d630585462f8626e13f",
"text": "As a basic task in computer vision, semantic segmentation can provide fundamental information for object detection and instance segmentation to help the artificial intelligence better understand real world. Since the proposal of fully convolutional neural network (FCNN), it has been widely used in semantic segmentation because of its high accuracy of pixel-wise classification as well as high precision of localization. In this paper, we apply several famous FCNN to brain tumor segmentation, making comparisons and adjusting network architectures to achieve better performance measured by metrics such as precision, recall, mean of intersection of union (mIoU) and dice score coefficient (DSC). The adjustments to the classic FCNN include adding more connections between convolutional layers, enlarging decoders after up sample layers and changing the way shallower layers’ information is reused. Besides the structure modification, we also propose a new classifier with a hierarchical dice loss. Inspired by the containing relationship between classes, the loss function converts multiple classification to multiple binary classification in order to counteract the negative effect caused by imbalance data set. Massive experiments have been done on the training set and testing set in order to assess our refined fully convolutional neural networks and new types of loss function. Competitive figures prove they are more effective than their predecessors.",
"title": ""
},
{
"docid": "d1525fdab295a16d5610210e80fb8104",
"text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.",
"title": ""
},
{
"docid": "6b73282f0f99fc58f6d351e53c7521ae",
"text": "BACKGROUND\nBicycle theft is a serious problem in many countries, and there is a lack of evidence concerning effective prevention strategies. Displaying images of 'watching eyes' has been shown to make people behave in more socially desirable ways in a number of settings, but it is not yet clear if this effect can be exploited for purposes of crime prevention. We report the results of a simple intervention on a university campus where signs featuring watching eyes and a related verbal message were displayed above bicycle racks.\n\n\nMETHODOLOGY AND PRINCIPAL FINDINGS\nWe installed durable signs at three locations which had experienced high levels of bicycle theft, and used the rest of the university campus as a control location. Reported thefts were monitored for 12 months before and after the intervention. Bicycle thefts decreased by 62% at the experimental locations, but increased by 65% in the control locations, suggesting that the signs were effective, but displaced offending to locations with no signs. The Odds Ratio for the effect of the intervention was 4.28 (95% confidence interval 2.04-8.98), a large effect compared to other place-based crime prevention interventions.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThe effectiveness of this extremely cheap and simple intervention suggests that there can be considerable crime-reduction benefits to engaging the psychology of surveillance, even in the absence of surveillance itself. Simple interventions for high-crime locations based on this principle should be considered as an adjunct to other measures, although a possible negative consequence is displacement of offending.",
"title": ""
},
{
"docid": "8a0cc5438a082ed9afd28ad8ed272034",
"text": "Researchers analyzed 23 blockchain implementation projects, each tracked for design decisions and architectural alignment showing benefits, detriments, or no effects from blockchain use. The results provide the basis for a framework that lets engineers, architects, investors, and project leaders evaluate blockchain technology’s suitability for a given application. This analysis also led to an understanding of why some domains are inherently problematic for blockchains. Blockchains can be used to solve some trust-based problems but aren’t always the best or optimal technology. Some problems that can be solved using them can also be solved using simpler methods that don’t necessitate as big an investment.",
"title": ""
},
{
"docid": "a8f5f7c147c1ac8cabf86d4809aa3f65",
"text": "Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest.",
"title": ""
},
{
"docid": "f022a506d51f58a53ba03342a301a221",
"text": "Irregular scene text such as curved, rotated or perspective texts commonly appear in natural scene images due to different camera view points, special design purposes etc. In this work, we propose a text salience map guided model to recognize these arbitrary direction scene texts. We train a deep Fully Convolutional Network (FCN) to calculate the precise salience map for texts. Then we estimate the positions and rotations of the text and utilize this information to guide the generation of CNN sequence features. Finally the sequence is recognized with a Recurrent Neural Network (RNN) model. Experiments on various public datasets show that the proposed approach is robust to different distortions and performs superior or comparable to the state-of-the-art techniques.",
"title": ""
},
{
"docid": "b13fa98311719f107b45e8d6840497f1",
"text": "Social Networks allow users to self-present by sharing personal contents with others which may add comments. Recent studies highlighted how the emotions expressed in a post affect others’ posts, eliciting a congruent emotion. So far, no studies have yet investigated the emotional coherence between wall posts and its comments. This research evaluated posts and comments mood of Facebook profiles, analyzing their linguistic features, and a measure to assess an excessive self-presentation was introduced. Two new experimental measures were built, describing the emotional loading (positive and negative) of posts and comments, and the mood correspondence between them was evaluated. The profiles ”empathy”, the mood coherence between post and comments, was used to investigate the relation between an excessive self-presentation and the emotional coherence of a profile. Participants publish a higher average number of posts with positive mood. To publish an emotional post corresponds to get more likes, comments and receive a coherent mood of comments, confirming the emotional contagion effect reported in literature. Finally, the more empathetic profiles are characterized by an excessive self-presentation, having more posts, and receiving more comments and likes. To publish emotional contents appears to be functional to receive more comments and likes, fulfilling needs of attention-seeking.",
"title": ""
},
{
"docid": "8c3639614a66f1ec04d7a57b51377124",
"text": "Scene text extraction methodologies are usually based in classification of individual regions or patches, using a priori knowledge for a given script or language. Human perception of text, on the other hand, is based on perceptual organisation through which text emerges as a perceptually significant group of atomic objects. Therefore humans are able to detect text even in languages and scripts never seen before. In this paper, we argue that the text extraction problem could be posed as the detection of meaningful groups of regions. We present a method built around a perceptual organisation framework that exploits collaboration of proximity and similarity laws to create text-group hypotheses. Experiments demonstrate that our algorithm is competitive with state of the art approaches on a standard dataset covering text in variable orientations and two languages.",
"title": ""
},
{
"docid": "756929d22f107a5ff0b3bf0b19414a06",
"text": "Users of social networking sites such as Facebook frequently post self-portraits on their profiles. While research has begun to analyze the motivations for posting such pictures, less is known about how selfies are evaluated by recipients. Although producers of selfies typically aim to create a positive impression, selfies may also be regarded as narcissistic and therefore fail to achieve the intended goal. The aim of this study is to examine the potentially ambivalent reception of selfies compared to photos taken by others based on the Brunswik lens model Brunswik (1956). In a between-subjects online experiment (N = 297), Facebook profile mockups were shown which differed with regard to picture type (selfie vs. photo taken by others), gender of the profile owner (female vs. male), and number of individuals within a picture (single person vs. group). Results revealed that selfies were indeed evaluated more negatively than photos taken by others. Persons in selfies were rated as less trustworthy, less socially attractive, less open to new experiences, more narcissistic and more extroverted than the same persons in photos taken by others. In addition, gender differences were observed in the perception of pictures. Male profile owners were rated as more narcissistic and less trustworthy than female profile owners, but there was no significant interaction effect of type of picture and gender. Moreover, a mediation analysis of presumed motives for posting selfies revealed that negative evaluations of selfie posting individuals were mainly driven by the perceived motivation of impression management. Findings suggest that selfies are likely to be evaluated less positively than producers of selfies might suppose.",
"title": ""
},
{
"docid": "9fe0ab3484b58d9902e8862a8f2556ad",
"text": "For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather/illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: 1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and 2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: 1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and 2) route planners help the driving task significantly, especially for steering angle prediction. Code, data and more visual results will be made available at http://www.vision.ee.ethz.ch/ ̃heckers/Drive360.",
"title": ""
},
{
"docid": "9accdf3edad1e9714282e58758d3c382",
"text": "We present initial results from and quantitative analysis of two leading open source hypervisors, Xen and KVM. This study focuses on the overall performance, performance isolation, and scalability of virtual machines running on these hypervisors. Our comparison was carried out using a benchmark suite that we developed to make the results easily repeatable. Our goals are to understand how the different architectural decisions taken by different hypervisor developers affect the resulting hypervisors, to help hypervisor developers realize areas of improvement for their hypervisors, and to help users make informed decisions about their choice of hypervisor.",
"title": ""
},
{
"docid": "85b95ad66c0492661455281177004b9e",
"text": "Although relatively small in size and power output, automotive accessory motors play a vital role in improving such critical vehicle characteristics as drivability, comfort, and, most importantly, fuel economy. This paper describes a design method and experimental verification of a novel technique for torque ripple reduction in stator claw-pole permanent-magnet (PM) machines, which are a promising technology prospect for automotive accessory motors.",
"title": ""
}
] |
scidocsrr
|
cecdaa7ef303e06843ba9e8641f59d09
|
A space-efficient parallel algorithm for computing betweenness centrality in distributed memory
|
[
{
"docid": "666b9e88e881bbaa70037ba6f2548acf",
"text": "Since the early 1990s, there has been a significant research activity in efficient parallel algorithms and novel computer architectures for problems that have been already solved sequentially (sorting, maximum flow, searching, etc). In this handout, we are interested in parallel algorithms and we avoid particular hardware details. The primary architectural model for our algorithms is a simplified machine called Parallel RAM (or PRAM). In essence, the PRAM model consists of a number p of processors that can read and/or write on a shared “global” memory in parallel (i.e., at the same time). The processors can also perform various arithmetic and logical operations in parallel.",
"title": ""
}
] |
[
{
"docid": "1104928cf56f0f1f582279abd4c2c0df",
"text": "Research on when and how to use three-dimensional (3D) perspective views on flat screens for operational tasks such as air traffic control is complex. We propose a functional distinction between tasks: those that require shape understanding versus those that require precise judgments of relative position. The distortions inherent in 3D displays hamper judging relative positions, whereas the integration of dimensions in 3D displays facilitates shape understanding. We confirmed these hypotheses with two initial experiments involving simple block shapes. The shape-understanding tasks were identification or mental rotation. The relative-position tasks were locating shadows and determining directions and distances between objects. We then extended the results to four experiments involving complex natural terrain. We compare our distinction with the integral/separable task distinction of Haskel and Wickens (1993). Applications for this research include displays for air traffic control, geoplots for military command and control, and potentially, any display of 3D information.",
"title": ""
},
{
"docid": "1262ce9e36e4208a1d8e641e5078e083",
"text": "D its fundamental role in legitimizing the modern state system, nationalism has rarely been linked to the outbreak of political violence in the recent literature on ethnic conflict and civil war. to a large extent, this is because the state is absent from many conventional theories of ethnic conflict. indeed, some studies analyze conflict between ethnic groups under conditions of state failure, thus making the absence of the state the very core of the causal argument. others assume that the state is ethnically neutral and try to relate ethnodemographic measures, such as fractionalization and polarization, to civil war. in contrast to these approaches, we analyze the state as an institution that is captured to different degrees by representatives of particular ethnic communities, and thus we conceive of ethnic wars as the result of competing ethnonationalist claims to state power. While our work relates to a rich research tradition that links the causes of such conflicts to the mobilization of ethnic minorities, it also goes beyond this tradition by introducing a new data set that addresses some of the shortcomings of this tradition. our analysis is based on the Ethnic power relations data set (epr), which covers all politically relevant ethnic groups and their access to power around the world from 1946 through 2005. this data set improves significantly on the widely used minorities at risk data set, which restricts its sample to mobilized",
"title": ""
},
{
"docid": "f327ed315be7d47b9f63dd9498999ae4",
"text": "In this paper we propose a deep architecture for detecting people attributes (e.g. gender, race, clothing …) in surveillance contexts. Our proposal explicitly deal with poor resolution and occlusion issues that often occur in surveillance footages by enhancing the images by means of Deep Convolutional Generative Adversarial Networks (DCGAN). Experiments show that by combining both our Generative Reconstruction and Deep Attribute Classification Network we can effectively extract attributes even when resolution is poor and in presence of strong occlusions up to 80% of the whole person figure.",
"title": ""
},
{
"docid": "481f4a4b14d4594d8b023f9df074dfeb",
"text": "We describe a method to produce a network where current methods such as DeepFool have great difficulty producing adversarial samples. Our construction suggests some insights into how deep networks work. We provide a reasonable analyses that our construction is difficult to defeat, and show experimentally that our method is hard to defeat with both Type I and Type II attacks using several standard networks and datasets. This SafetyNet architecture is used to an important and novel application SceneProof, which can reliably detect whether an image is a picture of a real scene or not. SceneProof applies to images captured with depth maps (RGBD images) and checks if a pair of image and depth map is consistent. It relies on the relative difficulty of producing naturalistic depth maps for images in post processing. We demonstrate that our SafetyNet is robust to adversarial examples built from currently known attacking approaches.",
"title": ""
},
{
"docid": "0c3ba78197c6d0f605b3b54149908705",
"text": "A novel design of solid phase microextraction fiber containing carbon nanotube reinforced sol-gel which was protected by polypropylene hollow fiber (HF-SPME) was developed for pre-concentration and determination of BTEX in environmental waste water and human hair samples. The method validation was included and satisfying results with high pre-concentration factors were obtained. In the present study orthogonal array experimental design (OAD) procedure with OA(16) (4(4)) matrix was applied to study the effect of four factors influencing the HF-SPME method efficiency: stirring speed, volume of adsorption organic solvent, extraction and desorption time of the sample solution, by which the effect of each factor was estimated using individual contributions as response functions in the screening process. Analysis of variance (ANOVA) was employed for estimating the main significant factors and their percentage contributions in extraction. Calibration curves were plotted using ten spiking levels of BTEX in the concentration ranges of 0.02-30,000ng/mL with correlation coefficients (r) 0.989-0.9991 for analytes. Under the optimized extraction conditions, the method showed good linearity (0.3-20,000ng/L), repeatability, low limits of detections (0.49-0.7ng/L) and excellent pre-concentration factors (185-1872). The best conditions which were estimated then applied for the analysis of BTEX compounds in the real samples.",
"title": ""
},
{
"docid": "9516d06751aa51edb0b0a3e2b75e0bde",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
{
"docid": "0e8ab182a2ad85d19d9384de0ac5f359",
"text": "Nowadays, many applications need data modeling facilities for the description of complex objects with spatial and/or temporal facilities. Responses to such requirements may be found in Geographic Information Systems (GIS), in some DBMS, or in the research literature. However, most f existing models cover only partly the requirements (they address either spatial or temporal modeling), and most are at the logical level, h nce not well suited for database design. This paper proposes a spatiotemporal modeling approach at the conceptual level, called MADS. The proposal stems from the identification of the criteria to be met for a conceptual model. It is advocated that orthogonality is the key issue for achieving a powerful and intuitive conceptual model. Thus, the proposal focuses on highlighting similarities in the modeling of space and time, which enhance readability and understandability of the model.",
"title": ""
},
{
"docid": "dc259f1208eac95817d067b9cd13fa7c",
"text": "This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the structure of the input noise distribution by constructing tensors with different types of dimensions. We call this technique Periodic Spatial GAN (PSGAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures from datasets of one or more complex large images. Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. In addition, we can also accurately learn periodical textures. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources. Our method is highly scalable and it can generate output images of arbitrary large size.",
"title": ""
},
{
"docid": "052eb9b25a2efa0c79b65c32c48c7d03",
"text": "The advent of high-resolution digital cameras and sophisticated multi-view stereo algorithms offers the promise of unprecedented geometric fidelity in image-based modeling tasks, but it also puts unprecedented demands on camera calibration to fulfill these promises. This paper presents a novel approach to camera calibration where top-down information from rough camera parameter estimates and the output of a multi-view-stereo system on scaled-down input images is used to effectively guide the search for additional image correspondences and significantly improve camera calibration parameters using a standard bundle adjustment algorithm (Lourakis and Argyros 2008). The proposed method has been tested on six real datasets including objects without salient features for which image correspondences cannot be found in a purely bottom-up fashion, and objects with high curvature and thin structures that are lost in visual hull construction even with small errors in camera parameters. Three different methods have been used to qualitatively assess the improvements of the camera parameters. The implementation of the proposed algorithm is publicly available at Furukawa and Ponce (2008b).",
"title": ""
},
{
"docid": "18b2600e3984762c808544f5ec9320fd",
"text": "Babesiosis is a disease with a world-wide distribution affecting many species of mammals principally cattle and man. The major impact occurs in the cattle industry where bovine babesiosis has had a huge economic effect due to loss of meat and beef production of infected animals and death. Nowadays to those costs there must be added the high cost of tick control, disease detection, prevention and treatment. In almost a century and a quarter since the first report of the disease, the truth is: there is no a safe and efficient vaccine available, there are limited chemotherapeutic choices and few low-cost, reliable and fast detection methods. Detection and treatment of babesiosis are important tools to control babesiosis. Microscopy detection methods are still the cheapest and fastest methods used to identify Babesia parasites although their sensitivity and specificity are limited. Newer immunological methods are being developed and they offer faster, more sensitive and more specific options to conventional methods, although the direct immunological diagnoses of parasite antigens in host tissues are still missing. Detection methods based on nucleic acid identification and their amplification are the most sensitive and reliable techniques available today; importantly, most of those methodologies were developed before the genomics and bioinformatics era, which leaves ample room for optimization. For years, babesiosis treatment has been based on the use of very few drugs like imidocarb or diminazene aceturate. Recently, several pharmacological compounds were developed and evaluated, offering new options to control the disease. With the complete sequence of the Babesia bovis genome and the B. bigemina genome project in progress, the post-genomic era brings a new light on the development of diagnosis methods and new chemotherapy targets. In this review, we will present the current advances in detection and treatment of babesiosis in cattle and other animals, with additional reference to several apicomplexan parasites.",
"title": ""
},
{
"docid": "35859e09799b48f63b76bd0aed464f95",
"text": "The rapid adoption of mobile devices comes with the growing prevalence of mobile malware. Mobile malware poses serious threats to personal information and creates challenges in securing network. Traditional network services provide connectivity but do not have any direct mechanism for security protection. The emergence of Software-Defined Networking (SDN) provides a unique opportunity to achieve network security in a more efficient and flexible manner. In this paper, we analyze the behaviors of mobile malware, propose several mobile malware detection algorithms, and design and implement a malware detection system using SDN. Our system detects mobile malware by identifying suspicious network activities through real-time traffic analysis, which only requires connection establishment packets. Specifically, our detection algorithms are implemented as modules inside the OpenFlow controller, and the security rules can be imposed in real time. We have tested our system prototype using both a local testbed and GENI infrastructure. Test results confirm the feasibility of our approach. In addition, the stress testing results show that even unoptimized implementations of our algorithms do not affect the performance of the OpenFlow controller significantly.",
"title": ""
},
{
"docid": "024570b927c0967bf0c2868c36fc16d6",
"text": "Cognitive training has been shown to improve executive functions (EFs) in middle childhood and adulthood. However, fewer studies have targeted the preschool years-a time when EFs undergo rapid development. The present study tested the effects of a short four session EF training program in 54 four-year-olds. The training group significantly improved their working memory from pre-training relative to an active control group. Notably, this effect extended to a task sharing few surface features with the trained tasks, and continued to be apparent 3 months later. In addition, the benefits of training extended to a measure of mathematical reasoning 3 months later, indicating that training EFs during the preschool years has the potential to convey benefits that are both long-lasting and wide-ranging.",
"title": ""
},
{
"docid": "88dea71422ca32235579e03bf66a3e07",
"text": "Compared to truly negative cultures, false-positive blood cultures not only increase laboratory work but also prolong lengths of patient stay and use of broad-spectrum antibiotics, both of which are likely to increase antibiotic resistance and patient morbidity. The increased patient suffering and surplus costs caused by blood culture contamination motivate substantial measures to decrease the rate of contamination, including the use of dedicated phlebotomy teams. The present study evaluated the effect of a simple informational intervention aimed at reducing blood culture contamination at Skåne University Hospital (SUS), Malmö, Sweden, during 3.5 months, focusing on departments collecting many blood cultures. The main examined outcomes of the study were pre- and postintervention contamination rates, analyzed with a multivariate logistic regression model adjusting for relevant determinants of contamination. A total of 51,264 blood culture sets were drawn from 14,826 patients during the study period (January 2006 to December 2009). The blood culture contamination rate preintervention was 2.59% and decreased to 2.23% postintervention (odds ratio, 0.86; 95% confidence interval, 0.76 to 0.98). A similar decrease in relevant bacterial isolates was not found postintervention. Contamination rates at three auxiliary hospitals did not decrease during the same period. The effect of the intervention on phlebotomists' knowledge of blood culture routines was also evaluated, with a clear increase in level of knowledge among interviewed phlebotomists postintervention. The present study shows that a relatively simple informational intervention can have significant effects on the level of contaminated blood cultures, even in a setting with low rates of contamination where nurses and auxiliary nurses conduct phlebotomies.",
"title": ""
},
{
"docid": "dc13ed2e2860cd0617345d80dea79a75",
"text": "Superparamagnetic iron oxide nanoparticles (SPION) with appropriate surface chemistry have been widely used experimentally for numerous in vivo applications such as magnetic resonance imaging contrast enhancement, tissue repair, immunoassay, detoxification of biological fluids, hyperthermia, drug delivery and in cell separation, etc. All these biomedical and bioengineering applications require that these nanoparticles have high magnetization values and size smaller than 100 nm with overall narrow particle size distribution, so that the particles have uniform physical and chemical properties. In addition, these applications need special surface coating of the magnetic particles, which has to be not only non-toxic and biocompatible but also allow a targetable delivery with particle localization in a specific area. To this end, most work in this field has been done in improving the biocompatibility of the materials, but only a few scientific investigations and developments have been carried out in improving the quality of magnetic particles, their size distribution, their shape and surface in addition to characterizing them to get a protocol for the quality control of these particles. Nature of surface coatings and their subsequent geometric arrangement on the nanoparticles determine not only the overall size of the colloid but also play a significant role in biokinetics and biodistribution of nanoparticles in the body. The types of specific coating, or derivatization, for these nanoparticles depend on the end application and should be chosen by keeping a particular application in mind, whether it be aimed at inflammation response or anti-cancer agents. Magnetic nanoparticles can bind to drugs, proteins, enzymes, antibodies, or nucleotides and can be directed to an organ, tissue, or tumour using an external magnetic field or can be heated in alternating magnetic fields for use in hyperthermia. This review discusses the synthetic chemistry, fluid stabilization and surface modification of superparamagnetic iron oxide nanoparticles, as well as their use for above biomedical applications.",
"title": ""
},
{
"docid": "f9dc4cfb42a5ec893f5819e03c64d4bc",
"text": "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.,,To better capture the structure dependency of human body joints, the generator G is designed in a stacked multi-task manner to predict poses as well as occlusion heatmaps. Then, the pose and occlusion heatmaps are sent to the discriminators to predict the likelihood of the pose being real. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on two widely used human pose estimation benchmark datasets. Our approach significantly outperforms the state-of-the-art methods and almost always generates plausible human pose predictions.",
"title": ""
},
{
"docid": "fb001e2fd9f2f25eb3d9a4ced27a12be",
"text": "Simulation is an appealing option for validating the safety of autonomous vehicles. Generative Adversarial Imitation Learning (GAIL) has recently been shown to learn representative human driver models. These human driver models were learned through training in single-agent environments, but they have difficulty in generalizing to multi-agent driving scenarios. We argue these difficulties arise because observations at training and test time are sampled from different distributions. This difference makes such models unsuitable for the simulation of driving scenes, where multiple agents must interact realistically over long time horizons. We extend GAIL to address these shortcomings through a parameter-sharing approach grounded in curriculum learning. Compared with single-agent GAIL policies, policies generated by our PS-GAIL method prove superior at interacting stably in a multi-agent setting and capturing the emergent behavior of human drivers.",
"title": ""
},
{
"docid": "866e7819b0389f26daab015c6ff40b69",
"text": "This study examined the effects of multiple risk, promotive, and protective factors on three achievement-related measures (i.e., grade point average, number of absences, and math achievement test scores) for African American 7th-grade students (n = 837). There were 3 main findings. First, adolescents had lower grade point averages, more absences, and lower achievement test scores as their exposure to risk factors increased. Second, different promotive and protective factors emerged as significant contributors depending on the nature of the achievement-related outcome that was being assessed. Third, protective factors were identified whose effects were magnified in the presence of multiple risks. Results were discussed in light of the developmental tasks facing adolescents and the contexts in which youth exposed to multiple risks and their families live.",
"title": ""
},
{
"docid": "525ddfaae4403392e8817986f2680a68",
"text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.",
"title": ""
},
{
"docid": "e18ddc1b569a6f39ee5cbf133738a2a1",
"text": "Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary— a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout’s discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.",
"title": ""
},
{
"docid": "21502c42ef7a8e342334b93b1b5069d6",
"text": "Motivations to engage in retail online shopping can include both utilitarian and hedonic shopping dimensions. To cater to these consumers, online retailers can create a cognitively and esthetically rich shopping environment, through sophisticated levels of interactive web utilities and features, offering not only utilitarian benefits and attributes but also providing hedonic benefits of enjoyment. Since the effect of interactive websites has proven to stimulate online consumer’s perceptions, this study presumes that websites with multimedia rich interactive utilities and features can influence online consumers’ shopping motivations and entice them to modify or even transform their original shopping predispositions by providing them with attractive and enhanced interactive features and controls, thus generating a positive attitude towards products and services offered by the retailer. This study seeks to explore the effects of Web interactivity on online consumer behavior through an attitudinal model of technology acceptance.",
"title": ""
}
] |
scidocsrr
|
e65b8a7bc1bbfe638311b5899d720555
|
Synchronous Generator Brushless Field Excitation and Voltage Regulation via Capacitive Coupling Through Journal Bearings
|
[
{
"docid": "4a284d2c47d385d24586c7c7de83dc1e",
"text": "Capacitive power transfer (CPT) systems have up to date been used for very low power delivery due to a number of limitations. A fundamental treatment of the problem is carried out and a CPT system is presented that achieves many times higher power throughput into low-impedance loads than traditional systems with the same interface capacitance and frequency of operation and with reasonable ratings for the switching devices. The development and analysis of the system is based well on the parameters of the capacitive interface and a design procedure is provided. The validity of the concept has been verified by an experimental CPT system that delivered more than 25W through a combined interface capacitance of 100 pF, at an operating frequency of only 1 MHz, with efficiency exceeding 80%.",
"title": ""
}
] |
[
{
"docid": "50ebb851bb0fceeddd39fdee66941e6c",
"text": "Machine learning involves optimizing a loss function on unlabeled data points given examples of labeled data points, where the loss function measures the performance of a learning algorithm. We give an overview of techniques, called reductions, for converting a problem of minimizing one loss function into a problem of minimizing another, simpler loss function. This tutorial discusses how to create robust reductions that perform well in practice. The reductions discussed here can be used to solve any supervised learning problem with a standard binary classification or regression algorithm available in any machine learning toolkit. We also discuss common design flaws in folklore reductions.",
"title": ""
},
{
"docid": "8a1ba356c34935a2f3a14656138f0414",
"text": "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.",
"title": ""
},
{
"docid": "62be3597e792abecc4afa44903edc9aa",
"text": "Digital forensic tools are being developed at a brisk pace in response to the ever increasing variety of forensic targets. Most tools are created for specific tasks – filesystem analysis, memory analysis, network analysis, etc. – and make little effort to interoperate with one another. This makes it difficult and extremely time-consuming for an investigator to build a wider view of the state of the system under investigation. In this work, we present FACE, a framework for automatic evidence discovery and correlation from a variety of forensic targets. Our prototype implementation demonstrates the integrated analysis and correlation of a disk image, memory image, network capture, and configuration log files. The results of this analysis are presented as a coherent view of the state of a target system, allowing investigators to quickly understand it. We also present an advanced open-source memory analysis tool, ramparser, for the automated analysis of Linux systems. a 2008 Digital Forensic Research Workshop. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df1db7eae960d3b16edb8d001b7b1f22",
"text": "This letter presents a novel approach for providing substrate-integrated waveguide tunable resonators by means of placing an additional metalized via-hole on the waveguide cavity. The via-hole contains an open-loop slot on the top metallic wall. The dimensions, position and orientation of the open-loop slot defines the tuning range. Fabrication of some designs reveals good agreement between simulation and measurements. Additionally, a preliminary prototype which sets the open-loop slot orientation manually is also presented, achieving a continuous tuning range of 8%.",
"title": ""
},
{
"docid": "20ce6bde3c15b63cad0a421282dbcdc6",
"text": "Baseline detection is still a challenging task for heterogeneous collections of historical documents. We present a novel approach to baseline extraction in such settings, turning out the winning entry to the ICDAR 2017 Competition on Baseline detection (cBAD). It utilizes deep convolutional nets (CNNs) for both, the actual extraction of baselines, as well as for a simple form of layout analysis in a pre-processing step. To the best of our knowledge it is the first CNN-based system for baseline extraction applying a U-net architecture and sliding window detection, profiting from a high local accuracy of the candidate lines extracted. Final baseline post-processing complements our approach, compensating for inaccuracies mainly due to missing context information during sliding window detection. We experimentally evaluate the components of our system individually on the cBAD dataset. Moreover, we investigate how it generalizes to different data by means of the dataset used for the baseline extraction task of the ICDAR 2017 Competition on Layout Analysis for Challenging Medieval Manuscripts (HisDoc). A comparison with the results reported for HisDoc shows that it also outperforms the contestants of the latter.",
"title": ""
},
{
"docid": "7c829563e98a6c75eb9b388bf0627271",
"text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.",
"title": ""
},
{
"docid": "653a2299cd8bc5cfb48e660390632911",
"text": "Recent studies indicate that several Toll-like receptors (TLRs) are implicated in recognizing viral structures and instigating immune responses against viral infections. The aim of this study is to examine the expression of TLRs and proinflammatory cytokines in viral skin diseases such as verruca vulgaris (VV) and molluscum contagiosum (MC). Reverse transcription-polymerase chain reaction and immunostaining of skin samples were performed to determine the expression of specific antiviral and proinflammatory cytokines as well as 5 TLRs (TLR2, 3, 4, 7, and 9). In normal human skin, TLR2, 4, and 7 mRNA was constitutively expressed, whereas little TLR3 and 9 mRNA was detected. Compared to normal skin (NS), TLR3 and 9 mRNA was clearly expressed in VV and MC specimens. Likewise, immunohistochemistry indicated that keratinocytes in NS constitutively expressed TLR2, 4, and 7; however, TLR3 was rarely detected and TLR9 was only weakly expressed, whereas 5 TLRs were all strongly expressed on the epidermal keratinocytes of VV and MC lesions. In addition, the mRNA expression of IFN-beta and TNF-alpha was upregulated in the VV and MC samples. Immunohistochemistry indicated that IFN-beta and TNF-alpha were predominantly localized in the granular layer in the VV lesions and adjacent to the MC bodies. Our results indicated that VV and MC skin lesions expressed TLR3 and 9 in addition to IFN-beta and TNF-alpha. These viral-induced proinflammatory cytokines may play a pivotal role in cutaneous innate immune responses.",
"title": ""
},
{
"docid": "3be0332ae074b81a41f09b115c201cb8",
"text": "Syphilis has several clinical manifestations, making laboratory testing a very important aspect of diagnosis. In North America, many unsuspected cases are discovered by laboratory testing. The etiological agent, Treponema pallidum, cannot be cultured, and there is no single optimal alternative test. Serological testing is the most frequently used approach in the laboratory diagnosis of syphilis. The present paper discusses the various serological and alternative tests currently available along with their limitations, and relates their results to the likely corresponding clinical stage of the disease. The need to use multiple tests is discussed, and the importance of quality control is noted. The complexity of syphilis serology means that the services of reference laboratories and clinical experts are often needed.",
"title": ""
},
{
"docid": "0cb34c6202328c57dbd1e8e7270d8aa6",
"text": "Optimization of deep learning is no longer an imminent problem, due to various gradient descent methods and the improvements of network structure, including activation functions, the connectivity style, and so on. Then the actual application depends on the generalization ability, which determines whether a network is effective. Regularization is an efficient way to improve the generalization ability of deep CNN, because it makes it possible to train more complex models while maintaining a lower overfitting. In this paper, we propose to optimize the feature boundary of deep CNN through a two-stage training method (pre-training process and implicit regularization training process) to reduce the overfitting problem. In the pre-training stage, we train a network model to extract the image representation for anomaly detection. In the implicit regularization training stage, we re-train the network based on the anomaly detection results to regularize the feature boundary and make it converge in the proper position. Experimental results on five image classification benchmarks show that the two-stage training method achieves a state-of-the-art performance and that it, in conjunction with more complicated anomaly detection algorithm, obtains better results. Finally, we use a variety of strategies to explore and analyze how implicit regularization plays a role in the two-stage training process. Furthermore, we explain how implicit regularization can be interpreted as data augmentation and model ensemble.",
"title": ""
},
{
"docid": "bb416322f9ce64045f2bd98cfeacb715",
"text": "This abstract presents our preliminary results on development of a cognitive assistant system for emergency response that aims to improve situational awareness and safety of first responders. This system integrates a suite of smart wearable sensors, devices, and analytics for real-time collection and analysis of in-situ data from incident scene and providing dynamic data-driven insights to responders on the most effective response actions to take.",
"title": ""
},
{
"docid": "bd37aa47cf495c7ea327caf2247d28e4",
"text": "The purpose of this study is to identify the negative effects of social network sites such as Facebook among Asia Pacific University scholars. The researcher, distributed 152 surveys to students of the chosen university to examine and study the negative effects. Electronic communication is emotionally gratifying but how do such technological distraction impact on academic performance? Because of social media platform’s widespread adoption by university students, there is an interest in how Facebook is related to academic performance. This paper measure frequency of use, participation in activities and time spent preparing for class, in order to know if Facebook affects the performance of students. Moreover, the impact of social network site on academic performance also raised another major concern which is health. Today social network sites are running the future and carrier of students. Social network sites were only an electronic connection between users, but unfortunately it has become an addiction for students. This paper examines the relationship between social network sites and health threat. Lastly, the paper provides a comprehensive analysis of the law and privacy of Facebook. It shows how Facebook users socialize on the site, while they are not aware or misunderstand the risk involved and how their privacy suffers as a result.",
"title": ""
},
{
"docid": "bc0def2cdcb570feaee55293cea0c97f",
"text": "Inductive Logic Programming (ILP) is a new discipline which investigates the inductive construction of rst-order clausal theories from examples and background knowledge. We survey the most important theories and methods of this new eld. Firstly, various problem speciications of ILP are formalised in semantic settings for ILP, yielding a \\model-theory\" for ILP. Secondly, a generic ILP algorithm is presented. Thirdly, the inference rules and corresponding operators used in ILP are presented, resulting in a \\proof-theory\" for ILP. Fourthly, since inductive inference does not produce statements which are assured to follow from what is given, inductive inferences require an alternative form of justiication. This can take the form of either probabilistic support or logical constraints on the hypothesis language. Information compression techniques used within ILP are presented within a unifying Bayesian approach to connrmation and corroboration of hypotheses. Also, diierent ways to constrain the hypothesis language, or specify the declarative bias are presented. Fifthly, some advanced topics in ILP are addressed. These include aspects of computational learning theory as applied to ILP, and the issue of predicate invention. Finally, we survey some applications and implementations of ILP. ILP applications fall under two diierent categories: rstly scientiic discovery and knowledge acquisition, and secondly programming assistants.",
"title": ""
},
{
"docid": "139a89ce2fcdfb987aa3476d3618b919",
"text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.",
"title": ""
},
{
"docid": "941bb3a220f2a1088548cfb3093faf45",
"text": "This paper proposes a human detection method that combines range image segmentation and human detection based on image local features. The method uses a stereo vision system called Subtraction Stereo, which extracts a range image of foreground regions. An extracted range image is segmented for each object by Mean Shift Clustering. Human detection based on local features is applied to each segment of foreground regions to detect humans. In this process, regions to scan a detection window for extracting local features are restricted. In addition, the size of the detection window is obtained using the distance information of a range image and camera parameters. Therefore, processing time and false detection can be reduced. Joint HOG features are used as the image local features. When applying the Joint HOG based human detection, occlusion of multiple humans is considered in construction of a classifier and in integration of detection windows, which improves the detection performance for the occluded humans. The proposed method is evaluated by experiments comparing with the method using Joint HOG features only. 11fps fast human detection is achieved.",
"title": ""
},
{
"docid": "bbb4f7b90ade0ffbf7ba3e598c18a78f",
"text": "In this paper, an analysis of the resistance of multi-track coils in printed circuit board (PCB) implementations, where the conductors have rectangular cross-section, for spiral planar coils is carried out. For this purpose, different analytical losses models for the mentioned conductors have been reviewed. From this review, we conclude that for the range of frequencies, the coil dimensions and the planar configuration typically used in domestic induction heating, the application in which we focus, these analysis are unsatisfactory. Therefore, in this work the resistance of multi-track winding has been calculated by means of finite element analysis (FEA) tool. These simulations provide us some design guidelines that allow us to optimize the design of multi-track coils for domestic induction heating. Furthermore, several prototypes are used to verify the simulated results, both single-turn coils and multi-turn coils.",
"title": ""
},
{
"docid": "e967917f49df3fb6bc243326c68772cf",
"text": "From the first pacemaker implant in 1958, numerous engineering and medical activities for implantable medical device development have faced challenges in materials, battery power, functionality, electrical power consumption, size shrinkage, system delivery, and wireless communication. With explosive advances in scientific and engineering technology, many implantable medical devices such as the pacemaker, cochlear implant, and real-time blood pressure sensors have been developed and improved. This trend of progress in medical devices will continue because of the coming super-aged society, which will result in more consumers for the devices. The inner body is a special space filled with electrical, chemical, mechanical, and marine-salted reactions. Therefore, electrical connectivity and communication, corrosion, robustness, and hermeticity are key factors to be considered during the development stage. The main participants in the development stage are the user, the medical staff, and the engineer or technician. Thus, there are three different viewpoints in the development of implantable devices. In this review paper, considerations in the development of implantable medical devices will be presented from the viewpoint of an engineering mind.",
"title": ""
},
{
"docid": "5066a15ddba96311302889267b228301",
"text": "This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of the sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminary analysis based on two well-known visual attention models.",
"title": ""
},
{
"docid": "193c60c3a14fe3d6a46b2624d45b70aa",
"text": "*Corresponding author: Shirin Sadat Ghiasi. Faculty of Medicine, Mashhad University of Medical Sciences, Mahshhad, Iran. E-mail: shirin.ghiasi@gmail.com Tel:+989156511388 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons. org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A Review Study on the Prenatal Diagnosis of Congenital Heart Disease Using Fetal Echocardiography",
"title": ""
},
{
"docid": "7bb17491cb10db67db09bc98aba71391",
"text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.",
"title": ""
},
{
"docid": "146185b62f79a684ed72940a01190ac7",
"text": "Nearing 30 years since its introduction, 3D printing technology is set to revolutionize research and teaching laboratories. This feature encompasses the history of 3D printing, reviews various printing methods, and presents current applications. The authors offer an appraisal of the future direction and impact this technology will have on laboratory settings as 3D printers become more accessible.",
"title": ""
}
] |
scidocsrr
|
23a6b86e263bee0df6297d134d1132ba
|
Lifted Probabilistic Inference with Counting Formulas
|
[
{
"docid": "219a90eb2fd03cd6cc5d89fda740d409",
"text": "The general problem of computing poste rior probabilities in Bayesian networks is NP hard Cooper However e cient algorithms are often possible for particular applications by exploiting problem struc tures It is well understood that the key to the materialization of such a possibil ity is to make use of conditional indepen dence and work with factorizations of joint probabilities rather than joint probabilities themselves Di erent exact approaches can be characterized in terms of their choices of factorizations We propose a new approach which adopts a straightforward way for fac torizing joint probabilities In comparison with the clique tree propagation approach our approach is very simple It allows the pruning of irrelevant variables it accommo dates changes to the knowledge base more easily it is easier to implement More importantly it can be adapted to utilize both intercausal independence and condi tional independence in one uniform frame work On the other hand clique tree prop agation is better in terms of facilitating pre computations",
"title": ""
},
{
"docid": "8dc493568e94d94370f78e663da7df96",
"text": "Expertise in C++, C, Perl, Haskell, Linux system administration. Technical experience in compiler design and implementation, release engineering, network administration, FPGAs, hardware design, probabilistic inference, machine learning, web search engines, cryptography, datamining, databases (SQL, Oracle, PL/SQL, XML), distributed knowledge bases, machine vision, automated web content generation, 2D and 3D graphics, distributed computing, scientific and numerical computing, optimization, virtualization (Xen, VirtualBox). Also experience in risk analysis, finance, game theory, firm behavior, international economics. Familiar with Java, C++ Standard Template Library, Java Native Interface, Java Foundation Classes, Android development, MATLAB, CPLEX, NetPBM, Cascading Style Sheets (CSS), Tcl/Tk, Windows system administration, Mac OS X system administration, ElasticSearch, modifying the Ubuntu installer.",
"title": ""
},
{
"docid": "5536e605e0b8a25ee0a5381025484f60",
"text": "Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation. Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure’s running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to sychronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks.",
"title": ""
},
{
"docid": "7fc6ffb547bc7a96e360773ce04b2687",
"text": "Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.",
"title": ""
},
{
"docid": "db897ae99b6e8d2fc72e7d230f36b661",
"text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.",
"title": ""
},
{
"docid": "93f1e6d0e14ce5aa07b32ca6bdf3dee4",
"text": "Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satis ability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include: belief updating, nding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the inducedwidth of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called \\conditioning search\" require only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and show how conditioning search can be augmented to systematically trade space for time.",
"title": ""
}
] |
[
{
"docid": "190d238e9fd3701c01a8408258d0fac6",
"text": "Depression and anxiety load in families. In the present study, we focus on exposure to parental negative emotions in first postnatal year as a developmental pathway to early parent-to-child transmission of depression and anxiety. We provide an overview of the little research available on the links between infants' exposure to negative emotion and infants' emotional development in this developmentally sensitive period, and highlight priorities for future research. To address continuity between normative and maladaptive development, we discuss exposure to parental negative emotions in infants of parents with as well as without depression and/or anxiety diagnoses. We focus on infants' emotional expressions in everyday parent-infant interactions, and on infants' attention to negative facial expressions as early indices of emotional development. Available evidence suggests that infants' emotional expressions echo parents' expressions and reactions in everyday interactions. In turn, infants exposed more to negative emotions from the parent seem to attend less to negative emotions in others' facial expressions. The links between exposure to parental negative emotion and development hold similarly in infants of parents with and without depression and/or anxiety diagnoses. Given its potential links to infants' emotional development, and to later psychological outcomes in children of parents with depression and anxiety, we conclude that early exposure to parental negative emotions is an important developmental mechanism that awaits further research. Longitudinal designs that incorporate the study of early exposure to parents' negative emotion, socio-emotional development in infancy, and later psychological functioning while considering other genetic and biological vulnerabilities should be prioritized in future research.",
"title": ""
},
{
"docid": "986f469fc8d367baa8ad0db10caf3241",
"text": "While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.",
"title": ""
},
{
"docid": "3ddf82be24ab5e20c141f67dfde05fdc",
"text": "In August 1998, Texas AM University implemented on campus a trap-test-vaccinate-alter-return-monitor (TTVARM) program to manage the feral cat population. TTVARM is an internationally recognized term for trapping and neutering programs aimed at management of feral cat populations. In this article we summarize results of the program for the period August 1998 to July 2000. In surgery laboratories, senior veterinary students examined cats that were humanely trapped once a month and tested them for feline leukemia and feline immunodeficiency virus infections, vaccinated, and surgically neutered them. They euthanized cats testing positive for either infectious disease. Volunteers provided food and observed the cats that were returned to their capture sites on campus and maintained in managed colonies. The program placed kittens and tame cats for adoption; cats totaled 158. Of the majority of 158 captured cats, there were less kittens caught in Year 2 than in Year 1. The proportion of tame cats trapped was significantly greater in Year 2 than in Year 1. The prevalence found for feline leukemia and feline immunodeficiency virus ELISA test positives was 5.8% and 6.5%, respectively. Following surgery, 101 cats returned to campus. The project recaptured, retested, and revaccinated more than one-fourth of the cats due for their annual vaccinations. The program placed 32 kittens, juveniles, and tame adults for adoption. The number of cat complaints received by the university's pest control service decreased from Year 1 to Year 2.",
"title": ""
},
{
"docid": "3f0d37296258c68a20da61f34364405d",
"text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.",
"title": ""
},
{
"docid": "7ce147a433a376dd1cc0f7f09576e1bd",
"text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).",
"title": ""
},
{
"docid": "2c3566048334e60ae3f30bd631e4da87",
"text": "The Indian Railways is world's fourth largest railway network in the world after USA, Russia and China. There is a severe problem of collisions of trains. So Indian railway is working in this aspect to promote the motto of "SAFE JOURNEY". A RFID based railway track finding system for railway has been proposed in this paper. In this system the RFID tags and reader are used which are attached in the tracks and engine consecutively. So Train engine automatically get the data of path by receiving it from RFID tag and detect it. If path is correct then train continue to run on track and if it is wrong then a signal is generated and sent to the control station and after this engine automatically stop in a minimum time and the display of LCD show the "WRONG PATH". So the collision and accident of train can be avoided. With the help of this system the train engine would be programmed to move according to the requirement. The another feature of this system is automatic track changer by which the track jointer would move automatically according to availability of trains.",
"title": ""
},
{
"docid": "08fedcf80c0905de2598ccd45da706a5",
"text": "Translation of named entities (NEs), such as person names, organization names and location names is crucial for cross lingual information retrieval, machine translation, and many other natural language processing applications. Newly named entities are introduced on daily basis in newswire and this greatly complicates the translation task. Also, while some names can be translated, others must be transliterated, and, still, others are mixed. In this paper we introduce an integrated approach for named entity translation deploying phrase-based translation, word-based translation, and transliteration modules into a single framework. While Arabic based, the approach introduced here is a unified approach that can be applied to NE translation for any language pair.",
"title": ""
},
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
},
{
"docid": "4451f35b38f0b3af0ff006d8995b0265",
"text": "Social media together with still growing social media communities has become a powerful and promising solution in crisis and emergency management. Previous crisis events have proved that social media and mobile technologies used by citizens (widely) and public services (to some extent) have contributed to the post-crisis relief efforts. The iSAR+ EU FP7 project aims at providing solutions empowering citizens and PPDR (Public Protection and Disaster Relief) organizations in online and mobile communications for the purpose of crisis management especially in search and rescue operations. This paper presents the results of survey aiming at identification of preliminary end-user requirements in the close interworking with end-users across Europe.",
"title": ""
},
{
"docid": "6646b66370ed02eb84661c8505eb7563",
"text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.",
"title": ""
},
{
"docid": "fb8b90ccf64f64e7f5c4e2c6718107df",
"text": "The Standardized Precipitation Evapotranspiration Index (SPEI) was developed in 2010 and has been used in an increasing number of climatology and hydrology studies. The objective of this article is to describe computing options that provide flexible and robust use of the SPEI. In particular, we present methods for estimating the parameters of the log-logistic distribution for obtaining standardized values, methods for computing reference evapotranspiration (ET0), and weighting kernels used for calculation of the SPEI at different time scales. We discuss the use of alternative ET0 and actual evapotranspiration (ETa) methods and different options on the resulting SPEI series by use of observational and global gridded data. The results indicate that the equation used to calculate ET0 can have a significant effect on the SPEI in some regions of the world. Although the original formulation of the SPEI was based on plotting-positions Probability Weighted Moment (PWM), we now recommend use of unbiased PWM for model fitting. Finally, we present new software tools for computation and analysis of SPEI series, an updated global gridded database, and a realtime drought-monitoring system.",
"title": ""
},
{
"docid": "31756ac6aaa46df16337dbc270831809",
"text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。",
"title": ""
},
{
"docid": "83ed8190d8f0715d79580043b83d3620",
"text": "We describe a probabilistic method for identifying characters in TV series or movies. We aim at labeling every character appearance, and not only those where a face can be detected. Consequently, our basic unit of appearance is a person track (as opposed to a face track). We model each TV series episode as a Markov Random Field, integrating face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In order to identify tracks without faces, we learn clothing models by adapting available face recognition results. Within a scene, as indicated by prior analysis of the temporal structure of the TV series, clothing features are combined by agglomerative clustering. We evaluate our approach on the first 6 episodes of The Big Bang Theory and achieve an absolute improvement of 20% for person identification and 12% for face recognition.",
"title": ""
},
{
"docid": "97711981f9bfe4f9ba7b2070427988d4",
"text": "Mathematical models have been used to provide an explicit framework for understanding malaria transmission dynamics in human population for over 100 years. With the disease still thriving and threatening to be a major source of death and disability due to changed environmental and socio-economic conditions, it is necessary to make a critical assessment of the existing models, and study their evolution and efficacy in describing the host-parasite biology. In this article, starting from the basic Ross model, the key mathematical models and their underlying features, based on their specific contributions in the understanding of spread and transmission of malaria have been discussed. The first aim of this article is to develop, starting from the basic models, a hierarchical structure of a range of deterministic models of different levels of complexity. The second objective is to elaborate, using some of the representative mathematical models, the evolution of modelling strategies to describe malaria incidence by including the critical features of host-vector-parasite interactions. Emphasis is more on the evolution of the deterministic differential equation based epidemiological compartment models with a brief discussion on data based statistical models. In this comprehensive survey, the approach has been to summarize the modelling activity in this area so that it helps reach a wider range of researchers working on epidemiology, transmission, and other aspects of malaria. This may facilitate the mathematicians to further develop suitable models in this direction relevant to the present scenario, and help the biologists and public health personnel to adopt better understanding of the modelling strategies to control the disease",
"title": ""
},
{
"docid": "4ae7e3cb36dd23cfe41e743e47844cb7",
"text": "We present a voltage-scalable and process-variation resilient memory architecture, suitable for MPEG-4 video processors such that power dissipation can be traded for graceful degradation in \"quality\". The key innovation in our proposed work is a hybrid memory array, which is mixture of conventional 6T and 8T SRAM bit-cells. The fundamental premise of our approach lies in the fact that human visual system (HVS) is mostly sensitive to higher order bits of luminance pixels in video data. We implemented a preferential storage policy in which the higher order luma bits are stored in robust 8T bit-cells while the lower order bits are stored in conventional 6T bit-cells. This facilitates aggressive scaling of supply voltage in memory as the important luma bits, stored in 8T bit-cells, remain relatively unaffected by voltage scaling. The not-so-important lower order luma bits, stored in 6T bit-cells, if affected, contribute insignificantly to the overall degradation in output video quality. Simulation results show average power savings of up to 56%, in the hybrid memory array compared to the conventional 6T SRAM array implemented in 65nm CMOS. The area overhead and maximum output quality degradation (PSNR) incurred were 11.5% and 0.56 dB, respectively.",
"title": ""
},
{
"docid": "6c682f3412cc98eac5ae2a2356dccef7",
"text": "Since their inception, micro-size light emitting diode (µLED) arrays based on III-nitride semiconductors have emerged as a promising technology for a range of applications. This paper provides an overview on a decade progresses on realizing III-nitride µLED based high voltage single-chip AC/DC-LEDs without power converters to address the key compatibility issue between LEDs and AC power grid infrastructure; and high-resolution solid-state self-emissive microdisplays operating in an active driving scheme to address the need of high brightness, efficiency and robustness of microdisplays. These devices utilize the photonic integration approach by integrating µLED arrays on-chip. Other applications of nitride µLED arrays are also discussed.",
"title": ""
},
{
"docid": "b4dd76179734fb43e74c9c1daef15bbf",
"text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.",
"title": ""
},
{
"docid": "4bc73a7e6a6975ba77349cac62a96c18",
"text": "BACKGROUND\nIn May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear.\n\n\nMETHODS\nThe study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play.\n\n\nRESULTS\nOn average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found.\n\n\nCONCLUSIONS\nThe present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.",
"title": ""
},
{
"docid": "9172d4ba2e86a7d4918ef64d7b837084",
"text": "Electromagnetic generators (EMGs) and triboelectric nanogenerators (TENGs) are the two most powerful approaches for harvesting ambient mechanical energy, but the effectiveness of each depends on the triggering frequency. Here, after systematically comparing the performances of EMGs and TENGs under low-frequency motion (<5 Hz), we demonstrated that the output performance of EMGs is proportional to the square of the frequency, while that of TENGs is approximately in proportion to the frequency. Therefore, the TENG has a much better performance than that of the EMG at low frequency (typically 0.1-3 Hz). Importantly, the extremely small output voltage of the EMG at low frequency makes it almost inapplicable to drive any electronic unit that requires a certain threshold voltage (∼0.2-4 V), so that most of the harvested energy is wasted. In contrast, a TENG has an output voltage that is usually high enough (>10-100 V) and independent of frequency so that most of the generated power can be effectively used to power the devices. Furthermore, a TENG also has advantages of light weight, low cost, and easy scale up through advanced structure designs. All these merits verify the possible killer application of a TENG for harvesting energy at low frequency from motions such as human motions for powering small electronics and possibly ocean waves for large-scale blue energy.",
"title": ""
},
{
"docid": "0a790469194c1984ae2175d9ea49688c",
"text": "Gynecomastia refers to a benign enlargement of the male breast. This article describes the authors’ method of using power-assisted liposuction and gland removal through a subareolar incision for thin patients. Power-assisted liposuction is performed for removal of fatty breast tissue in the chest area to allow skin retraction. The subareolar incision is used to remove glandular tissue from a male subject considered to be within a normal weight range but who has bilateral grade 1 or 2 gynecomastia. Gynecomastia correction was successfully performed for all the patients. The average volume of aspirated fat breast was 100–200 ml on each side. Each breast had 5–80 g of breast tissue removed. At the 3-month, 6-month, and 1-year follow-up assessments, all the treated patients were satisfied with their aesthetic results. Liposuction has the advantages of reducing the fat tissue where necessary to allow skin retraction and of reducing the traces left by surgery. The combination of surgical excision and power-assisted lipoplasty also is a valid choice for the treatment of thin patients.",
"title": ""
}
] |
scidocsrr
|
26cd0a06689903372e740f3bc1dd0e1c
|
Visual Relationship Prediction via Label Clustering and Incorporation of Depth Information
|
[
{
"docid": "d71040311b8753299377b02023ba5b4c",
"text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"title": ""
},
{
"docid": "4d59fd865447cfd1d54623e267af491c",
"text": "Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"title": ""
},
{
"docid": "df9acaed8dbcfbd38a30e4e1fa77aa8a",
"text": "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-theart object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
"title": ""
},
{
"docid": "c3827ca529fa0ffd60cc192a08b87d92",
"text": "We present the first fully convolutional end-to-end solution for instance-aware semantic segmentation task. It inherits all the merits of FCNs for semantic segmentation [29] and instance mask proposal [5]. It performs instance mask prediction and classification jointly. The underlying convolutional representation is fully shared between the two sub-tasks, as well as between all regions of interest. The network architecture is highly integrated and efficient. It achieves state-of-the-art performance in both accuracy and efficiency. It wins the COCO 2016 segmentation competition by a large margin. Code would be released at https://github.com/daijifeng001/TA-FCN.",
"title": ""
}
] |
[
{
"docid": "98bc8bf4166f8e415dc96873e8d9201f",
"text": "Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense. In this paper, we present a story comprehension model that explores three distinct semantic aspects: (i) the sequence of events described in the story, (ii) its emotional trajectory, and (iii) its plot consistency. We judge the model’s understanding of real-world stories by inquiring if, like humans, it can develop an expectation of what will happen next in a given story. Specifically, we use it to predict the correct ending of a given short story from possible alternatives. The model uses a hidden variable to weigh the semantic aspects in the context of the story. Our experiments demonstrate the potential of our approach to characterize these semantic aspects, and the strength of the hidden variable based approach. The model outperforms the stateof-the-art approaches and achieves best results on a publicly available dataset.",
"title": ""
},
{
"docid": "61909a81470a9fea27a2f12aadb2c183",
"text": "One of the major research areas attracting much interest is face recognition. This is due to the growing need of detection and recognition in the modern days' industrial applications. However, this need is conditioned with the high performance standards that these applications require in terms of speed and accuracy. In this work we present a comparison between two main techniques of face recognition in unconstraint scenes. The first one is Edge-Orientation Matching and the second technique is Haar-like feature selection combined cascade classifiers.",
"title": ""
},
{
"docid": "95365d5f04b2cefcca339fbc19464cbb",
"text": "Manipulation and re-use of images in scientific publications is a concerning problem that currently lacks a scalable solution. Current tools for detecting image duplication are mostly manual or semi-automated, despite the availability of an overwhelming target dataset for a learning-based approach. This paper addresses the problem of determining if, given two images, one is a manipulated version of the other by means of copy, rotation, translation, scale, perspective transform, histogram adjustment, or partial erasing. We propose a data-driven solution based on a 3-branch Siamese Convolutional Neural Network. The ConvNet model is trained to map images into a 128-dimensional space, where the Euclidean distance between duplicate images is smaller than or equal to 1, and the distance between unique images is greater than 1. Our results suggest that such an approach has the potential to improve surveillance of the published and in-peer-review literature for image manipulation.",
"title": ""
},
{
"docid": "e2d647ae5e758796069a36ffa44cf90a",
"text": "We describe a Genetic Algorithm that can evolve complete programs. Using a variable length linear genome to govern how a Backus Naur Form grammar deenition is mapped to a program, expressions and programs of arbitrary complexity may be evolved. Other automatic programming methods are described, before our system, Grammatical Evolution, is applied to a symbolic regression problem.",
"title": ""
},
{
"docid": "6d262d30db4d6db112f40e5820393caf",
"text": "This study sought to examine the effects of service quality and customer satisfaction on the repurchase intentions of customers of restaurants on University of Cape Coast Campus. The survey method was employed involving a convenient sample of 200 customers of 10 restaurants on the University of Cape Coast Campus. A modified DINESERV scale was used to measure customers’ perceived service quality. The results of the study indicate that four factors accounted for 50% of the variance in perceived service quality, namely; responsivenessassurance, empathy-equity, reliability and tangibles. Service quality was found to have a significant effect on customer satisfaction. Also, both service quality and customer satisfaction had significant effects on repurchase intention. However, customer satisfaction could not moderate the effect of service quality on repurchase intention. This paper adds to the debate on the dimensions of service quality and provides evidence on the effects of service quality and customer satisfaction on repurchase intention in a campus food service context.",
"title": ""
},
{
"docid": "d4f3cc4ac102fc922499001c8a8ab6af",
"text": "This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease.The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review.Part 1 (featured in the September issue of Geriatrics & Aging) began with an approach to the neurological examination in normal aging and in disease,and reviewed components of the general physical, head and neck, neurovascular and cranial nerve examinations relevant to aging and dementia. Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3, featured here, reviews the assessment of coordination,balance and gait,and Part 4 will discuss the muscle stretch reflexes, pathological and primitive reflexes, sensory examination and concluding remarks. Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the",
"title": ""
},
{
"docid": "558abc8028d1d5b6956d2cf046efb983",
"text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.",
"title": ""
},
{
"docid": "53e6216c2ad088dfcf902cc0566072c6",
"text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. Because module temperature of floating PV system is lower than that of overland PV system, the floating PV system has 11% better generation efficiency than overland PV system. In the thesis, superiority of floating PV system is verified through comparison analysis of generation amount by 2.4kW, 100kW and 500kW floating PV system installed by K-water and the cause of such superiority was analyzed. Also, effect of wind speed, and waves on floating PV system structure was measured to analyze the effect of the environment on floating PV system generation efficiency.",
"title": ""
},
{
"docid": "0a78c9305d4b5584e87327ba2236d302",
"text": "This paper presents GeoS, a new algorithm for the efficient segmentation of n-dimensional image and video data. The segmentation problem is cast as approximate energy minimization in a conditional random field. A new, parallel filtering operator built upon efficient geodesic distance computation is used to propose a set of spatially smooth, contrast-sensitive segmentation hypotheses. An economical search algorithm finds the solution with minimum energy within a sensible and highly restricted subset of all possible labellings. Advantages include: i) computational efficiency with high segmentation accuracy; ii) the ability to estimate an approximation to the posterior over segmentations; iii) the ability to handle generally complex energy models. Comparison with max-flow indicates up to 60 times greater computational efficiency as well as greater memory efficiency. GeoS is validated quantitatively and qualitatively by thorough comparative experiments on existing and novel ground-truth data. Numerous results on interactive and automatic segmentation of photographs, video and volumetric medical image data are presented.",
"title": ""
},
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
},
{
"docid": "c72fc4fd63c64c2980d10aecf03a50c8",
"text": "Unmasking the non-functional requirements (NFRs) such as quality attributes, interface requirements and design constraints of software is crucial in finding the architectural alternatives for software starting from early design opinions. For developing quality software product, extraction of NFRs from requirement documents is needed to be carried out and it's beneficiary if this process becomes automated, reducing the human efforts, time and mental fatigue involved in identifying specific requirements from a large number of requirements in a document. The proposal presented in this paper combines automated identification and classification of requirement sentences into NFR sub-classes with the help of rule-based classification technique using thematic roles and identifying the priority of extracted NFR sentences within the document according to their occurrence in multiple NFR classes. F1-measure of 97% is obtained on PROMISE corpus and 94% F1-Measure on Concordia RE corpus. The results established validates the claim that proposal provides specific and higher results than the previous state of art approaches.",
"title": ""
},
{
"docid": "0cbb6fc4d4cfbb7f7fa123cdefa18b7c",
"text": "We investigate the potential of attention-based neural machine translation in simultaneous translation. We introduce a novel decoding algorithm, called simultaneous greedy decoding, that allows an existing neural machine translation model to begin translating before a full source sentence is received. This approach is unique from previous works on simultaneous translation in that segmentation and translation are done jointly to maximize the translation quality and that translating each segment is strongly conditioned on all the previous segments. This paper presents a first step toward building a full simultaneous translation system based on neural machine translation.",
"title": ""
},
{
"docid": "66c2fcf1076796bb0a7fa16b18eac612",
"text": "A firewall is a security guard placed at the point of entry between a private network and the outside Internet such that all incoming and outgoing packets have to pass through it. The function of a firewall is to examine every incoming or outgoing packet and decide whether to accept or discard it. This function is conventionally specified by a sequence of rules, where rules often conflict. To resolve conflicts, the decision for each packet is the decision of the first rule that the packet matches. The current practice of designing a firewall directly as a sequence of rules suffers from three types of major problems: (1) the consistency problem, which means that it is difficult to order the rules correctly; (2) the completeness problem, which means that it is difficult to ensure thorough consideration for all types of traffic; (3) the compactness problem, which means that it is difficult to keep the number of rules small (because some rules may be redundant and some rules may be combined into one rule). To achieve consistency, completeness, and compactness, we propose a new method called Structured Firewall Design, which consists of two steps. First, one designs a firewall using a Firewall Decision Diagram instead of a sequence of often conflicting rules. Second, a program converts the firewall decision diagram into a compact, yet functionally equivalent, sequence of rules. This method addresses the consistency problem because a firewall decision diagram is conflict-free. It addresses the completeness problem because the syntactic requirements of a firewall decision diagram force the designer to consider all types of traffic. It also addresses the compactness problem because in the second step we use two algorithms (namely FDD reduction and FDD marking) to combine rules together, and one algorithm (namely Firewall compaction) to remove redundant rules. Moreover, the techniques and algorithms presented in this paper are extensible to other rule-based systems such as IPsec rules.",
"title": ""
},
{
"docid": "084b83aed850aca07bed298de455c110",
"text": "Leveraging built-in cameras on smartphones and tablets, face authentication provides an attractive alternative of legacy passwords due to its memory-less authentication process. However, it has an intrinsic vulnerability against the media-based facial forgery (MFF) where adversaries use photos/videos containing victims' faces to circumvent face authentication systems. In this paper, we propose FaceLive, a practical and robust liveness detection mechanism to strengthen the face authentication on mobile devices in fighting the MFF-based attacks. FaceLive detects the MFF-based attacks by measuring the consistency between device movement data from the inertial sensors and the head pose changes from the facial video captured by built-in camera. FaceLive is practical in the sense that it does not require any additional hardware but a generic front-facing camera, an accelerometer, and a gyroscope, which are pervasively available on today's mobile devices. FaceLive is robust to complex lighting conditions, which may introduce illuminations and lead to low accuracy in detecting important facial landmarks; it is also robust to a range of cumulative errors in detecting head pose changes during face authentication.",
"title": ""
},
{
"docid": "8346980ed9e27cd98942a33df1e62792",
"text": "tems. File modes and some partitioning schemes are the only means a user has to specify desired access patterns. These are not sufficient for the user to specify the file structure matching the access patterns of particular applications since they were developed under the abstraction of a one-dimensional file. This paper presents the design of UPIO, a software for User-controllable Parallel Input and Output, and our experience in producing high-performance external computation codes using UPIO. UPIO is designed to maximize 1/0 performance for scientific applications on MIMD multicomputers. It allows users to determine the file structure by considering the access pat terns of particular applications and the distribution of data for parallel access, and do 1 / 0 collectively. This enables users l o produce high-performance external computation codes by planning I/O, computations, communication, and the reuse of data effectively in the codes. We show how well UPIO produces highperformance external computation codes by designing 1 / 0 arid rrierriory-efficient external matrix multiplication algorithms and exploring the effects of UPIO with the codes. The rest of this paper is organized as follows. In Section 2 we briefly describe the functionality of UPIO, including our architecture model. 1 / 0 and memory-efficient matrix multiplication algorithm using UP10 and the experimental results are presented in Section 3 and 4. Finally, we make some concluding remarks.",
"title": ""
},
{
"docid": "b912ed016c6ec20ae362bbd60c4ebadb",
"text": "BACKGROUND\nAs the prevalence of obesity increases in developing countries, the double burden of malnutrition (DBM) has become a public health problem, particularly in countries such as Guatemala with a high concentration of indigenous communities where the prevalence of stunting remains high.\n\n\nOBJECTIVE\nThe aim was to describe and analyze the prevalence of DBM over time (1998-2008) in indigenous and nonindigenous Guatemalan populations.\n\n\nDESIGN\nWe used 3 National Maternal and Child Health Surveys conducted in Guatemala between 1998 and 2008 that include anthropometric data from children aged 0-60 mo and women of reproductive age (15-49 y). We assessed the prevalence of childhood stunting and both child and adult female overweight and obesity between 1998 and 2008. For the year 2008, we assessed the prevalence of DBM at the household (a stunted child and an overweight mother) and individual (stunting/short stature and overweight or anemia and overweight in the same individual) levels and compared the expected and observed prevalence rates to test if the coexistence of the DBM conditions corresponded to expected values.\n\n\nRESULTS\nBetween 1998 and 2008, the prevalence of childhood stunting decreased in both indigenous and nonindigenous populations, whereas overweight and obesity in women increased faster in indigenous populations than in nonindigenous populations (0.91% compared with 0.38%/y; P-trend < 0.01). In 2008, the prevalence of stunted children was 28.8 percentage points higher and of overweight women 4.6 percentage points lower in indigenous compared with nonindigenous populations (63.7% compared with 34.9% and 46.7% compared with 51.3%, respectively). DBM at the household and individual levels was higher in indigenous populations and was higher in geographic areas in which most of the population was indigenous, where there was also a greater prevalence of stunting and DBM at the individual level, both in women and children.\n\n\nCONCLUSIONS\nIn Guatemala, DBM is more prevalent in indigenous than in nonindigenous populations at the household and individual levels. To enhance effectiveness, current strategies of national policies and programs should consider DBM and focus on indigenous populations.",
"title": ""
},
{
"docid": "f7c47b9447af707e9ce212fc35a1f404",
"text": "The article describes the method of malware activities identification using ontology and rules. The method supports detection of malware at host level by observing its behavior. It sifts through hundred thousands of regular events and allows to identify suspicious ones. They are then passed on to the second building block responsible for malware tracking and matching stored models with observed malicious actions. The presented method was implemented and verified in the infected computer environment. As opposed to signature-based antivirus mechanisms it allows to detect malware the code of which has been obfuscated.",
"title": ""
},
{
"docid": "3fffd4317116d8ff0165916681ce1c46",
"text": "The challenges of Machine Reading and Knowledge Extraction at a web scale require a system capable of extracting diverse information from large, heterogeneous corpora. The Open Information Extraction (OIE) paradigm aims at extracting assertions from large corpora without requiring a vocabulary or relation-specific training data. Most systems built on this paradigm extract binary relations from arbitrary sentences, ignoring the context under which the assertions are correct and complete. They lack the expressiveness needed to properly represent and extract complex assertions commonly found in the text. To address the lack of representation power, we propose NESTIE, which uses a nested representation to extract higher-order relations, and complex, interdependent assertions. Nesting the extracted propositions allows NESTIE to more accurately reflect the meaning of the original sentence. Our experimental study on real-world datasets suggests that NESTIE obtains comparable precision with better minimality and informativeness than existing approaches. NESTIE produces 1.7-1.8 times more minimal extractions and achieves 1.1-1.2 times higher informativeness than CLAUSIE.",
"title": ""
},
{
"docid": "7fc335731c6394c17078d8bba67c0c2c",
"text": "This paper proposes a method for detecting changes of a scene using a pair of its vehicular, omnidirectional images. Previous approaches to the problem require the use of a 3D scene model and/or pixel-level registration between different time images. They are also computationally costly for estimating city-scale changes. We propose a novel change detection method that uses features of convolutional neural network (CNN) in combination with superpixel segmentation. Comparison of CNN features gives a lowresolution map of scene changes that is robust to illumination changes and viewpoint differences. Superpixel segmentation of the scene images is integrated with this lowresolution map to estimate precise segmentation boundaries of the changes. Our motivation is to develop a method for detecting city-scale changes, which can be used for visualization of damages of a natural disaster and subsequent recovery processes as well as for the purpose of maintaining/updating the 3D model of a city. We have created a dataset named Panoramic Change Detection Dataset, which will be made publicly available for evaluating the performances of change detection methods in these scenarios. The experimental results using the dataset show the effectiveness of our approach.",
"title": ""
}
] |
scidocsrr
|
bcf17bd1af85a9b8de2c770d22f0ac47
|
Your Voice Assistant is Mine: How to Abuse Speakers to Steal Information and Control Your Phone
|
[
{
"docid": "6ee601387e550e896b3a3938016b03f7",
"text": "Android phone manufacturers are under the perpetual pressure to move quickly on their new models, continuously customizing Android to fit their hardware. However, the security implications of this practice are less known, particularly when it comes to the changes made to Android's Linux device drivers, e.g., those for camera, GPS, NFC etc. In this paper, we report the first study aimed at a better understanding of the security risks in this customization process. Our study is based on ADDICTED, a new tool we built for automatically detecting some types of flaws in customized driver protection. Specifically, on a customized phone, ADDICTED performs dynamic analysis to correlate the operations on a security-sensitive device to its related Linux files, and then determines whether those files are under-protected on the Linux layer by comparing them with their counterparts on an official Android OS. In this way, we can detect a set of likely security flaws on the phone. Using the tool, we analyzed three popular phones from Samsung, identified their likely flaws and built end-to-end attacks that allow an unprivileged app to take pictures and screenshots, and even log the keys the user enters through touch screen. Some of those flaws are found to exist on over a hundred phone models and affect millions of users. We reported the flaws and helped the manufacturers fix those problems. We further studied the security settings of device files on 2423 factory images from major phone manufacturers, discovered over 1,000 vulnerable images and also gained insights about how they are distributed across different Android versions, carriers and countries.",
"title": ""
}
] |
[
{
"docid": "14bb62c02192f837303dcc2e327475a6",
"text": "In this paper, we have proposed three kinds of network security situation awareness (NSSA) models. In the era of big data, the traditional NSSA methods cannot analyze the problem effectively. Therefore, the three models are designed for big data. The structure of these models are very large, and they are integrated into the distributed platform. Each model includes three modules: network security situation detection (NSSD), network security situation understanding (NSSU), and network security situation projection (NSSP). Each module comprises different machine learning algorithms to realize different functions. We conducted a comprehensive study of the safety of these models. Three models compared with each other. The experimental results show that these models can improve the efficiency and accuracy of data processing when dealing with different problems. Each model has its own advantages and disadvantages.",
"title": ""
},
{
"docid": "afddd19cb7c08820cf6f190d07bed8eb",
"text": "This paper presents a method for stand-still identification of parameters in a permanent magnet synchronous motor (PMSM) fed from an inverter equipped with an three-phase LCtype output filter. Using a special random modulation strategy, the method uses the inverter for broad-band excitation of the PMSM fed through an LC-filter. Based on the measured current response, model parameters for both the filter (L, R, C) and the PMSM (L and R) are estimated: First, the frequency response of the system is estimated using Welch Modified Periodogram method and then an optimization algorithm is used to find the parameters in an analytical reference model that minimize the model error. To demonstrate the practical feasibility of the method, a fully functional drive including an embedded real-time controller has been built. In addition to modulation, data acquisition and control the whole parameter identification method is also implemented on the real-time controller. Based on laboratory experiments on a 22 kW drive, it it concluded that the embedded identification method can estimate the five parameters in less than ten seconds.",
"title": ""
},
{
"docid": "9f933f59d2a7852d1ce5dc986d056928",
"text": "The fundamental tradeoff between the rates at which energy and reliable information can be transmitted over a single noisy line is studied. Engineering inspiration for this problem is provided by powerline communication, RFID systems, and covert packet timing systems as well as communication systems that scavenge received energy. A capacity-energy function is defined and a coding theorem is given. The capacity-energy function is a non-increasing concave cap function. Capacity-energy functions for several channels are computed.",
"title": ""
},
{
"docid": "4654a1926d0caa787ade6aaf58e00474",
"text": "GitHub is the most widely used social, distributed version control system. It has around 10 million registered users and hosts over 16 million public repositories. Its user base is also very active as GitHub ranks in the top 100 Alexa most popular websites. In this study, we collect GitHub’s state in its entirety. Doing so, allows us to study new aspects of the ecosystem. Although GitHub is the home to millions of users and repositories, the analysis of users’ activity time-series reveals that only around 10% of them can be considered active. The collected dataset allows us to investigate the popularity of programming languages and existence of pattens in the relations between users, repositories, and programming languages. By, applying a k-means clustering method to the usersrepositories commits matrix, we find that two clear clusters of programming languages separate from the remaining. One cluster forms for “web programming” languages (Java Script, Ruby, PHP, CSS), and a second for “system oriented programming” languages (C, C++, Python). Further classification, allow us to build a phylogenetic tree of the use of programming languages in GitHub. Additionally, we study the main and the auxiliary programming languages of the top 1000 repositories in more detail. We provide a ranking of these auxiliary programming languages using various metrics, such as percentage of lines of code, and PageRank.",
"title": ""
},
{
"docid": "8e180c13b925188f1925fee03c641669",
"text": "“Web applications have become increasingly complex and highly vulnerable,” says Peter Wood, member of the ISACA Security Advisory Group and CEO of First Base Technologies. “Social networking sites, consumer technologies – smartphones, tablets etc – and cloud services are all game changers this year. More enterprises are now requesting social engineering tests, which shows an increased awareness of threats beyond website attacks.”",
"title": ""
},
{
"docid": "d836e5c3ef7742b6dfb47c46672fa251",
"text": "Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.",
"title": ""
},
{
"docid": "0553d2c43f4382efc9718616301b1da9",
"text": "This paper presents an XML-based Adaptive Hypermedia Model (XAHM) and its modular architecture, for modelling and supporting Adaptive Hypermedia Systems, i.e. hypertext-based multimedia systems that allow user-driven access to information and content personalization. We propose a graph-based layered model for the description of the logical structure of the hypermedia, and XML-based models for the description of i) metadata about basic information fragments and ii) \"neutral\" pages to be adapted. Furthermore, we describe a modular architecture, which allows the design of the hypermedia and its run-time support. We introduce a multidimensional approach to model different aspects of the adaptation process, which is based on three different \"adaptivity dimensions\": user’s behaviour (preferences and browsing activity), technology (network and user’s terminal) and external environment (time, location, language, socio-political issues, etc.). An Adaptive Hypermedia is modelled with respect to such dimensions, and a view over it corresponds to each potential position of the user in the \"adaptation space\"; the model supports the adaptation of both contents and link structure of the hypermedia.",
"title": ""
},
{
"docid": "3535e70b1c264d99eff5797413650283",
"text": "MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective is to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the hand-held is well made with respect to MIMO. In fact, the best of the handhelds performed similar to the reference antenna.",
"title": ""
},
{
"docid": "b6160256dd6877fea4cec96b74ebc03a",
"text": "A cascaded long short-term memory (LSTM) architecture with discriminant feature learning is proposed for the task of question answering on real world images. The proposed LSTM architecture jointly learns visual features and parts of speech (POS) tags of question words or tokens. Also, dimensionality of deep visual features is reduced by applying Principal Component Analysis (PCA) technique. In this manner, the proposed question answering model captures the generic pattern of question for a given context of image which is just not constricted within the training dataset. Empirical outcome shows that this kind of approach significantly improves the accuracy. It is believed that this kind of generic learning is a step towards a real-world visual question answering (VQA) system which will perform well for all possible forms of open-ended natural language queries.",
"title": ""
},
{
"docid": "4a5a5958eaf3a011a04d4afc1155e521",
"text": "1 Department of Geography, University of Kentucky, Lexington, Kentucky, United States of America, 2 Microsoft Research, New York, New York, United States of America, 3 Data & Society, New York, New York, United States of America, 4 Information Law Institute, New York University, New York, New York, United States of America, 5 Department of Media and Communications, London School of Economics, London, United Kingdom, 6 Harvard-Smithsonian Center for Astrophysics, Harvard University, Cambridge, Massachusetts, United States of America, 7 Center for Engineering Ethics and Society, National Academy of Engineering, Washington, DC, United States of America, 8 Institute for Health Aging, University of California-San Francisco, San Francisco, California, United States of America, 9 Ethical Resolve, Santa Cruz, California, United States of America, 10 Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 11 Department of Sociology, Columbia University, New York, New York, United States of America, 12 Carey School of Law, University of Maryland, Baltimore, Maryland, United States of America",
"title": ""
},
{
"docid": "94d512ad8ddc7c788f71a014b0c0bdec",
"text": "In this paper, we propose a new Soft Confidence-Weighted (SCW) online learning scheme, which enables the conventional confidence-weighted learning method to handle non-separable cases. Unlike the previous confidence-weighted learning algorithms, the proposed soft confidence-weighted learning method enjoys all the four salient properties: (i) large margin training, (ii) confidence weighting, (iii) capability to handle non-separable data, and (iv) adaptive margin. Our experimental results show that the proposed SCW algorithms significantly outperform the original CW algorithm. When comparing with a variety of state-of-theart algorithms (including AROW, NAROW and NHERD), we found that SCW generally achieves better or at least comparable predictive accuracy, but enjoys significant advantage of computational efficiency (i.e., smaller number of updates and lower time cost).",
"title": ""
},
{
"docid": "2e8a644c6412f9b490bad0e13e11794d",
"text": "The traditional wisdom for building disk-based relational database management systems (DBMS) is to organize data in heavily-encoded blocks stored on disk, with a main memory block cache. In order to improve performance given high disk latency, these systems use a multi-threaded architecture with dynamic record-level locking that allows multiple transactions to access the database at the same time. Previous research has shown that this results in substantial overhead for on-line transaction processing (OLTP) applications [15]. The next generation DBMSs seek to overcome these limitations with architecture based on main memory resident data. To overcome the restriction that all data fit in main memory, we propose a new technique, called anti-caching, where cold data is moved to disk in a transactionally-safe manner as the database grows in size. Because data initially resides in memory, an anti-caching architecture reverses the traditional storage hierarchy of disk-based systems. Main memory is now the primary storage device. We implemented a prototype of our anti-caching proposal in a high-performance, main memory OLTP DBMS and performed a series of experiments across a range of database sizes, workload skews, and read/write mixes. We compared its performance with an open-source, disk-based DBMS optionally fronted by a distributed main memory cache. Our results show that for higher skewed workloads the anti-caching architecture has a performance advantage over either of the other architectures tested of up to 9⇥ for a data size 8⇥ larger than memory.",
"title": ""
},
{
"docid": "52212ff3e1c85b5f5c3fcf0ec71f6f8b",
"text": "Embodied cognition theory proposes that individuals' abstract concepts can be associated with sensorimotor processes. The authors examined the effects of teaching participants novel embodied metaphors, not based in prior physical experience, and found evidence suggesting that they lead to embodied simulation, suggesting refinements to current models of embodied cognition. Creating novel embodiments of abstract concepts in the laboratory may be a useful method for examining mechanisms of embodied cognition.",
"title": ""
},
{
"docid": "f9af6cca7d9ac18ace9bc6169b4393cc",
"text": "Metric learning has become a widespreadly used tool in machine learning. To reduce expensive costs brought in by increasing dimensionality, low-rank metric learning arises as it can be more economical in storage and computation. However, existing low-rank metric learning algorithms usually adopt nonconvex objectives, and are hence sensitive to the choice of a heuristic low-rank basis. In this paper, we propose a novel low-rank metric learning algorithm to yield bilinear similarity functions. This algorithm scales linearly with input dimensionality in both space and time, therefore applicable to highdimensional data domains. A convex objective free of heuristics is formulated by leveraging trace norm regularization to promote low-rankness. Crucially, we prove that all globally optimal metric solutions must retain a certain low-rank structure, which enables our algorithm to decompose the high-dimensional learning task into two steps: an SVD-based projection and a metric learning problem with reduced dimensionality. The latter step can be tackled efficiently through employing a linearized Alternating Direction Method of Multipliers. The efficacy of the proposed algorithm is demonstrated through experiments performed on four benchmark datasets with tens of thousands of dimensions.",
"title": ""
},
{
"docid": "35dd6675e287b5e364998ee138677032",
"text": "Focussed structured document retrieval employs the concept of best entry points (BEPs), which are intended to provide optimal starting-points from which users can browse to relevant document components. This paper describes two small-scale studies, using experimental data from the Shakespeare user study, which developed and evaluated different approaches to the problem of automatic identification of BEPs.",
"title": ""
},
{
"docid": "6c7bf63f9394bf5432f67b5e554743ae",
"text": "419 INTRODUCTION A team from APL has been using model-based systems engineering (MBSE) methods within a conceptual modeling process to support and unify activities related to system-of-systems architecture development; modeling, simulation, and analysis efforts; and system capability trade studies. These techniques have been applied to support analysis of complex systems, particularly in the net-centric operations and warfare domain, which has proven particularly challenging to the modeling, simulation, and analysis community because of its complexity, information richness, and broad scope. In particular, the APL team has used MBSE techniques to provide structured models of complex systems incorporating input from multiple diverse stakeholders odel-based systems engineering techniques facilitate complex system design and documentation processes. A rigorous, iterative conceptual development process based on the Unified Modeling Language (UML) or the Systems Modeling Language (SysML) and consisting of domain modeling, use case development, and behavioral and structural modeling supports design, architecting, analysis, modeling and simulation, test and evaluation, and program management activities. The resulting model is more useful than traditional documentation because it represents structure, data, and functions, along with associated documentation, in a multidimensional, navigable format. Beyond benefits to project documentation and stakeholder communication, UMLand SysML-based models also support direct analysis methods, such as functional thread extraction. The APL team is continuing to develop analysis techniques using conceptual models to reduce the risk of design and test errors, reduce costs, and improve the quality of analysis and supporting modeling and simulation activities in the development of complex systems. Model-Based Systems Engineering in Support of Complex Systems Development",
"title": ""
},
{
"docid": "12cac87e781307224db2c3edf0d217b8",
"text": "Fetal ventriculomegaly (VM) refers to the enlargement of the cerebral ventricles in utero. It is associated with the postnatal diagnosis of hydrocephalus. VM is clinically diagnosed on ultrasound and is defined as an atrial diameter greater than 10 mm. Because of the anatomic detailed seen with advanced imaging, VM is often further characterized by fetal magnetic resonance imaging (MRI). Fetal VM is a heterogeneous condition with various etiologies and a wide range of neurodevelopmental outcomes. These outcomes are heavily dependent on the presence or absence of associated anomalies and the direct cause of the ventriculomegaly rather than on the absolute degree of VM. In this review article, we discuss diagnosis, work-up, counseling, and management strategies as they relate to fetal VM. We then describe imaging-based research efforts aimed at using prenatal data to predict postnatal outcome. Finally, we review the early experience with fetal therapy such as in utero shunting, as well as the advances in prenatal diagnosis and fetal surgery that may begin to address the limitations of previous therapeutic efforts.",
"title": ""
},
{
"docid": "910b955d0d290e90fe207418b5601019",
"text": "We propose a branch flow model for the analysis and optimization of mesh as well as radial networks. The model leads to a new approach to solving optimal power flow (OPF) that consists of two relaxation steps. The first step eliminates the voltage and current angles and the second step approximates the resulting problem by a conic program that can be solved efficiently. For radial networks, we prove that both relaxation steps are always exact, provided there are no upper bounds on loads. For mesh networks, the conic relaxation is always exact but the angle relaxation may not be exact, and we provide a simple way to determine if a relaxed solution is globally optimal. We propose convexification of mesh networks using phase shifters so that OPF for the convexified network can always be solved efficiently for an optimal solution. We prove that convexification requires phase shifters only outside a spanning tree of the network and their placement depends only on network topology, not on power flows, generation, loads, or operating constraints. Part I introduces our branch flow model, explains the two relaxation steps, and proves the conditions for exact relaxation. Part II describes convexification of mesh networks, and presents simulation results.",
"title": ""
},
{
"docid": "38f386546b5f866d45ff243599bd8305",
"text": "During the last two decades, Structural Equation Modeling (SEM) has evolved from a statistical technique for insiders to an established valuable tool for a broad scientific public. This class of analyses has much to offer, but at what price? This paper provides an overview on SEM, its underlying ideas, potential applications and current software. Furthermore, it discusses avoidable pitfalls as well as built-in drawbacks in order to lend support to researchers in deciding whether or not SEM should be integrated into their research tools. Commented findings of an internet survey give a “State of the Union Address” on SEM users and usage. Which kinds of models are preferred? Which software is favoured in current psychological research? In order to assist the reader on his first steps, a SEM first-aid kit is included. Typical problems and possible solutions are addressed, helping the reader to get the support he needs. Hence, the paper may assist the novice on the first steps and self-critically reminds the advanced reader of the limitations of Structural Equation Modeling",
"title": ""
},
{
"docid": "4a7bd38fcdcaa91cba875cecb8b7c7bd",
"text": "The aim of Search Based Software Engineering (SBSE) research is to move software engineering problems from human-based search to machine-based search, using a variety of techniques from the metaheuristic search, operations research and evolutionary computation paradigms. The idea is to exploit humans’ creativity and machines’ tenacity and reliability, rather than requiring humans to perform the more tedious, error prone and thereby costly aspects of the engineering process. SBSE can also provide insights and decision support. This tutorial will present the reader with a step-by-step guide to the application of SBSE techniques to Software Engineering. It assumes neither previous knowledge nor experience with Search Based Optimisation. The intention is that the tutorial will cover sufficient material to allow the reader to become productive in successfully applying search based optimisation to a chosen Software Engineering problem of interest.",
"title": ""
}
] |
scidocsrr
|
6d4a26f8b20b0882a2d580ef4a2e1c64
|
*-Box: Towards Reliability and Consistency in Dropbox-like File Synchronization Services
|
[
{
"docid": "1f976185517cf009b23a2600400af938",
"text": "We present a study of the effects of disk and memory corruption on file system data integrity. Our analysis focuses on Sun’s ZFS, a modern commercial offering with numerous reliability mechanisms. Through careful and thorough fault injection, we show that ZFS is robust to a wide range of disk faults. We further demonstrate that ZFS is less resilient to memory corruption, which can lead to corrupt data being returned to applications or system crashes. Our analysis reveals the importance of considering both memory and disk in the construction of truly robust file and storage systems.",
"title": ""
}
] |
[
{
"docid": "4e6216d5c3d428018d72542c3d1e5875",
"text": "The recent considerable growth in the amount of easily available on-line text has brought to the foreground the need for large-scale natural language processing tools for text data mining. In this paper we address the problem of organizing documents into meaningful groups according to their content and to visualize a text collection, providing an overview of the range of documents and of their relationships, so that they can be browsed more easily. We use SelfOrganizing Maps (SOMs) (Kohonen 1984). Great efficiency challenges arise in creating these maps. We study linguistically-motivated ways of reducing the representation of a document to increase efficiency and ways to disambiguate the words in the documents.",
"title": ""
},
{
"docid": "cf97c276a503968d849f45f4d1614bfd",
"text": "Social network platforms can archive data produced by their users. Then, the archived data is used to provide better services to the users. One of the services that these platforms provide is the recommendation service. Recommendation systems can predict the future preferences of users using various different techniques. One of the most popular technique for recommendation is matrix-factorization, which uses lowrank approximation of input data. Similarly, word embedding methods from natural language processing literature learn lowdimensional vector space representation of input elements. Noticing the similarities among word embedding and matrix factorization techniques and based on the previous works that apply techniques from text processing to recommendation, Word2Vec’s skip-gram technique is employed to make recommendations. The aim of this work is to make recommendation on next check-in venues. Unlike previous works that use Word2Vec for recommendation, in this work non-textual features are used. For the experiments, a Foursquare check-in dataset is used. The results show that use of vector space representations of items modeled by skip-gram technique is promising for making recommendations. Keywords—Recommendation systems, Location based social networks, Word embedding, Word2Vec, Skip-gram technique",
"title": ""
},
{
"docid": "5ed8c1b7efa827d9efcd537cd831142c",
"text": "The fundamental role of the software defined networks (SDNs) is to decouple the data plane from the control plane, thus providing a logically centralized visibility of the entire network to the controller. This enables the applications to innovate through network programmability. To establish a centralized visibility, a controller is required to discover a network topology of the entire SDN infrastructure. However, discovering a network topology is challenging due to: 1) the frequent migration of the virtual machines in the data centers; 2) lack of authentication mechanisms; 3) scarcity of the SDN standards; and 4) integration of security mechanisms for the topology discovery. To this end, in this paper, we present a comprehensive survey of the topology discovery and the associated security implications in SDNs. This survey provides discussions related to the possible threats relevant to each layer of the SDN architecture, highlights the role of the topology discovery in the traditional network and SDN, presents a thematic taxonomy of topology discovery in SDN, and provides insights into the potential threats to the topology discovery along with its state-of-the-art solutions in SDN. Finally, this survey also presents future challenges and research directions in the field of SDN topology discovery.",
"title": ""
},
{
"docid": "432ff163e4dded948aa5a27aa440cd30",
"text": "Eighty-one female and sixty-seven male undergraduates at a Malaysian university, from seven faculties and a Center for Language Studies completed a Computer Self-Efficacy Scale, Computer Anxiety Scale, and an Attitudes toward the Internet Scale and give information about their use of the Internet. This survey research investigated undergraduates’ computer anxiety, computer self-efficacy, and reported use of and attitudes toward the Internet. This study also examined differences in computer anxiety, computer selfefficacy, attitudes toward the Internet and reported use of the Internet for undergraduates with different demographic variables. The findings suggest that the undergraduates had moderate computer anxiousness, medium attitudes toward the Internet, and high computer self-efficacy and used the Internet extensively for educational purposes such as doing research, downloading electronic resources and e-mail communications. This study challenges the long perceived male bias in the computer environment and supports recent studies that have identified greater gender equivalence in interest, use, and skills levels. However, there were differences in undergraduates’ Internet usage levels based on the discipline of study. Furthermore, higher levels of Internet usage did not necessarily translate into better computer self-efficacy among the undergraduates. A more important factor in determining computer self-efficacy could be the discipline of study and undergraduates studying computer related disciplines appeared to have higher self-efficacy towards computers and the Internet. Undergraduates who used the Internet more often may not necessarily feel more comfortable using them. Possibly, other factors such as the types of application used, the purpose for using, and individual satisfaction could also influence computer self-efficacy and computer anxiety. However, although Internet usage levels may not have any impact on computer self-efficacy, higher usage of the Internet does seem to decrease the levels of computer anxiety among the undergraduates. Undergraduates with lower computer anxiousness demonstrated more positive attitudes toward the Internet in this study.",
"title": ""
},
{
"docid": "c7eceedbb7c6665dca1db772a22452dc",
"text": "This paper proposes a quadruped walking robot that has high performance as a working machine. This robot is needed for various tasks controlled by tele-operation, especially for humanitarian mine detection and removal. Since there are numerous personnel landmines that are still in place from many wars, it is desirable to provide a safe and inexpensive tool that civilians can use to remove those mines. The authors have been working on the concept of the humanitarian demining robot systems for 4 years and have performed basic experiments with the rst prototype VK-I using the modi ed quadruped walking robot, TITAN-VIII. After those experiments, it was possible to re ne some concepts and now the new robot has a tool (end-effector)changing system on its back, so that by utilizing the legs as manipulation arms and connecting various tools to the foot, it can perform mine detection and removal tasks. To accomplish these tasks, we developed various end-effectors that can be attached to the working leg. In this paper we will discuss the mechanical design of the new walking robot called TITAN-IX to be applied to the new system VK-II.",
"title": ""
},
{
"docid": "af6b0d1f5f3938c0912dccbe43a4a88b",
"text": "The mean body size of limnetic cladocerans decreases from cold temperate to tropical regions, in both the northern and the southern hemisphere. This size shift has been attributed to both direct (e.g. physiological) or indirect (especially increased predation) impacts. To provide further information on the role of predation, we compiled results from several studies of subtropical Uruguayan lakes using three different approaches: (i) field observations from two lakes with contrasting fish abundance, Lakes Rivera and Rodó, (ii) fish exclusion experiments conducted in in-lake mesocosms in three lakes, and (iii) analyses of the Daphnia egg bank in the surface sediment of eighteen lakes. When fish predation pressure was low due to fish kills in Lake Rivera, large-bodied Daphnia appeared. In contrast, small-sized cladocerans were abundant in Lake Rodó, which exhibited a typical high abundance of fish. Likewise, relatively large cladocerans (e.g. Daphnia and Simocephalus) appeared in fishless mesocosms after only 2 weeks, most likely hatched from resting egg banks stored in the surface sediment, but their abundance declined again after fish stocking. Moreover, field studies showed that 9 out of 18 Uruguayan shallow lakes had resting eggs of Daphnia in their surface sediment despite that this genus was only recorded in three of the lakes in summer water samples, indicating that Daphnia might be able to build up populations at low risk of predation. Our results show that medium and large-sized zooplankton can occur in subtropical lakes when fish predation is removed. The evidence provided here collectively confirms the hypothesis that predation, rather than high-temperature induced physiological constraints, is the key factor determining the dominance of small-sized zooplankton in warm lakes.",
"title": ""
},
{
"docid": "3b80d6b7cd4b9b0225cff5a4466bb390",
"text": "A large number of objectives have been proposed to train latent variable generative models. We show that many of them are Lagrangian dual functions of the same primal optimization problem. The primal problem optimizes the mutual information between latent and visible variables, subject to the constraints of accurately modeling the data distribution and performing correct amortized inference. By choosing to maximize or minimize mutual information, and choosing different Lagrange multipliers, we obtain different objectives including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, beta-VAE, adversarial autoencoders, AVB, AS-VAE and InfoVAE. Based on this observation, we provide an exhaustive characterization of the statistical and computational trade-offs made by all the training objectives in this class of Lagrangian duals. Next, we propose a dual optimization method where we optimize model parameters as well as the Lagrange multipliers. This method achieves Pareto optimal solutions in terms of optimizing information and satisfying the constraints.",
"title": ""
},
{
"docid": "d88523afba42431989f5d3bd22f2ad85",
"text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we proposal a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close iterations are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.",
"title": ""
},
{
"docid": "73267467deec2701d6628a0d3572132e",
"text": "Neuromyelitis optica (NMO) is an inflammatory CNS syndrome distinct from multiple sclerosis (MS) that is associated with serum aquaporin-4 immunoglobulin G antibodies (AQP4-IgG). Prior NMO diagnostic criteria required optic nerve and spinal cord involvement but more restricted or more extensive CNS involvement may occur. The International Panel for NMO Diagnosis (IPND) was convened to develop revised diagnostic criteria using systematic literature reviews and electronic surveys to facilitate consensus. The new nomenclature defines the unifying term NMO spectrum disorders (NMOSD), which is stratified further by serologic testing (NMOSD with or without AQP4-IgG). The core clinical characteristics required for patients with NMOSD with AQP4-IgG include clinical syndromes or MRI findings related to optic nerve, spinal cord, area postrema, other brainstem, diencephalic, or cerebral presentations. More stringent clinical criteria, with additional neuroimaging findings, are required for diagnosis of NMOSD without AQP4IgG or when serologic testing is unavailable. The IPND also proposed validation strategies and achieved consensus on pediatric NMOSD diagnosis and the concepts of monophasic NMOSD and opticospinal MS. Neurology® 2015;85:1–13 GLOSSARY ADEM 5 acute disseminated encephalomyelitis; AQP4 5 aquaporin-4; IgG 5 immunoglobulin G; IPND 5 International Panel for NMO Diagnosis; LETM 5 longitudinally extensive transverse myelitis lesions; MOG 5 myelin oligodendrocyte glycoprotein; MS 5 multiple sclerosis; NMO 5 neuromyelitis optica; NMOSD 5 neuromyelitis optica spectrum disorders; SLE 5 systemic lupus erythematosus; SS 5 Sjögren syndrome. Neuromyelitis optica (NMO) is an inflammatory CNS disorder distinct from multiple sclerosis (MS). It became known as Devic disease following a seminal 1894 report. Traditionally, NMO was considered a monophasic disorder consisting of simultaneous bilateral optic neuritis and transverse myelitis but relapsing cases were described in the 20th century. MRI revealed normal brain scans and$3 vertebral segment longitudinally extensive transverse myelitis lesions (LETM) in NMO. The nosology of NMO, especially whether it represented a topographically restricted form of MS, remained controversial. A major advance was the discovery that most patients with NMO have detectable serum antibodies that target the water channel aquaporin-4 (AQP4–immunoglobulin G [IgG]), are highly specific for clinically diagnosed NMO, and have pathogenic potential. In 2006, AQP4-IgG serology was incorporated into revised NMO diagnostic criteria that relaxed clinical From the Departments of Neurology (D.M.W.) 
and Library Services (K.E.W.), Mayo Clinic, Scottsdale, AZ; the Children’s Hospital of Philadelphia (B.B.), PA; the Departments of Neurology and Ophthalmology (J.L.B.), University of Colorado Denver, Aurora; the Service de Neurologie (P.C.), Centre Hospitalier Universitaire de Fort de France, Fort-de-France, Martinique; Department of Neurology (W.C.), Sir Charles Gairdner Hospital, Perth, Australia; the Department of Neurology (T.C.), Massachusetts General Hospital, Boston; the Department of Neurology (J.d.S.), Strasbourg University, France; the Department of Multiple Sclerosis Therapeutics (K.F.), Tohoku University Graduate School of Medicine, Sendai, Japan; the Departments of Neurology and Neurotherapeutics (B.G.), University of Texas Southwestern Medical Center, Dallas; The Walton Centre NHS Trust (A.J.), Liverpool, UK; the Molecular Neuroimmunology Group, Department of Neurology (S.J.), University Hospital Heidelberg, Germany; the Center for Multiple Sclerosis Investigation (M.L.-P.), Federal University of Minas Gerais Medical School, Belo Horizonte, Brazil; the Department of Neurology (M.L.), Johns Hopkins University, Baltimore, MD; Portland VA Medical Center and Oregon Health and Sciences University (J.H.S.), Portland; the Department of Neurology (S.T.), National Pediatric Hospital Dr. Juan P. Garrahan, Buenos Aires, Argentina; the Department of Medicine (A.L.T.), University of British Columbia, Vancouver, Canada; Nuffield Department of Clinical Neurosciences (P.W.), University of Oxford, UK; and the Department of Neurology (B.G.W.), Mayo Clinic, Rochester, MN. Go to Neurology.org for full disclosures. Funding information and disclosures deemed relevant by the authors, if any, are provided at the end of the article. The Article Processing Charge was paid by the Guthy-Jackson Charitable Foundation. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited. The work cannot be changed in any way or used commercially. © 2015 American Academy of Neurology 1 a 2015 American Academy of Neurology. Unauthorized reproduction of this article is prohibited. Published Ahead of Print on June 19, 2015 as 10.1212/WNL.0000000000001729",
"title": ""
},
{
"docid": "beea84b0d96da0f4b29eabf3b242a55c",
"text": "Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, CRYSTAL ISLAND.",
"title": ""
},
{
"docid": "7670b1eea992a1e83d3ebc1464563d60",
"text": "The present work was conducted to demonstrate a method that could be used to assess the hypothesis that children with specific language impairment (SLI) often respond more slowly than unimpaired children on a range of tasks. The data consisted of 22 pairs of mean response times (RTs) obtained from previously published studies; each pair consisted of a mean RT for a group of children with SLI for an experimental condition and the corresponding mean RT for a group of children without SLI. If children with SLI always respond more slowly than unimpaired children and by an amount that does not vary across tasks, then RTs for children with SLI should increase linearly as a function of RTs for age-matched control children without SLI. This result was obtained and is consistent with the view that differences in processing speed between children with and without SLI reflect some general (i.e., non-task specific) component of cognitive processing. Future applications of the method are suggested.",
"title": ""
},
{
"docid": "89a44c05597cf4e88936ee2f376f8847",
"text": "Online social networking sites have become increasingly popular over the last few years. As a result, new interdisciplinary research directions have emerged in which social network analysis methods are applied to networks containing hundreds millions of users. Unfortunately, links between individuals may be missing due to imperfect acquirement processes or because they are not yet reflected in the online network (i.e., friends in real world did not form a virtual connection.) Existing link prediction techniques lack the scalability required for full application on a continuously growing social network which may be adding everyday users with thousands of connections. The primary bottleneck in link prediction techniques is extracting structural features required for classifying links. In this paper we propose a set of simple, easy-to-compute structural features that can be analyzed to identify missing links. We show that a machine learning classifier trained using the proposed simple structural features can successfully identify missing links even when applied to a hard problem of classifying links between individuals who have at least one common friend. A new friends measure that we developed is shown to be a good predictor for missing links and an evaluation experiment was performed on five large social networks datasets: Face book, Flickr, You Tube, Academia and The Marker. Our methods can provide social network site operators with the capability of helping users to find known, offline contacts and to discover new friends online. They may also be used for exposing hidden links in an online social network.",
"title": ""
},
{
"docid": "ae1705c0b7be3c218c1fcb42cc53ea9a",
"text": "We examine the relation between executive compensation and corporate fraud. Executives at fraud firms have significantly larger equity-based compensation and greater financial incentives to commit fraud than do executives at industryand sizematched control firms. Executives at fraud firms also earn significantly more total compensation by exercising significantly larger fractions of their vested options than the control executives during the fraud years. Operating and stock performance measures suggest executives who commit corporate fraud attempt to offset declines in performance that would otherwise occur. Our results imply that optimal governance measures depend on the strength of executives’ financial incentives.",
"title": ""
},
{
"docid": "1ff9bf5a5a511a159cc1cc3623ad7f0a",
"text": "This paper illustrates the rectifier stress issue of the active clamped dual switch forward converters operating on discontinuous current mode (DCM), and analyzes the additional reverse voltage on the rectifier diode of active clamped dual switch forward converter at DCM operation, which does not appear in continuous current mode (CCM). The additional reverse voltage stress, plus its spikes, definitely causes many difficulties in designing high performance power supplies. In order to suppress this voltage spike to an acceptable level and improve the working conditions for the rectifier diode, this paper carefully explains and presents the working principles of active clamped dual switch forward converter in DCM operation, and theoretically analyzes the causes of the additional reverse voltage and its spikes. For conquering these difficulties, this paper also innovate active clamped snubber (ACS) cell to solve this issue. Furthermore, experiments on a 270W active clamped dual switch forward converter prototype were designed to validate the innovation. Finally, based on the similarities of the rectifier network in forward-topology based converters, this paper also extents the utility of this idea into even wider dc-dc converters.",
"title": ""
},
{
"docid": "0e37a1a251c97fd88aa2ab3ee9ed422b",
"text": "k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithms. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm, however, it requires more computational time than the global k-means algorithm.",
"title": ""
},
{
"docid": "95dfb99d65e01f4cfa71a7d12a89f3df",
"text": "The goal of surgery for anorectal malformations (ARM) is to achieve good bowel, urinary, and sexual functions, as well as the ability for children to become healthy adults. Various surgical procedures and surgical management protocols have been explored or devised by pediatric surgeons. These are described in this review. Making a correct type classification by invertography, fistelography and urethrography in the neonatal period allows pediatric surgeons to select an appropriate surgical strategy. Surgery for low-type malformations is principally neonatal perineoplasty, while that for intermediate- or high-type malformations is colostomy, followed by a pull-through operation during infancy. Posterior sagittal anorectoplasty or laparoscopy-assisted surgery has recently been accepted as alternative procedures. Fecal incontinence represents a devastating problem that often prevents a patient from becoming socially accepted and may cause serious psychological sequelae. One-third of adult patients with high- or intermediate-type malformations occasionally complain of fecal incontinence after surgery. Most patients with ARM have normal urinary function if they do not have urinary tract or sacral anomalies. These associated anomalies also influence the prognosis for sexual function, especially in males. Some female patients have experienced normal vaginal delivery and had children. In patients with cloacal malformation, however, fertility or sexual problems are also often present. Based on this information, it is clear that only well-planned and systemic treatments can provide a good functional prognosis after making a correct classification in the neonatal period.",
"title": ""
},
{
"docid": "cc4e8c21e58a8b26bf901b597d0971d8",
"text": "Pedestrian detection and semantic segmentation are high potential tasks for many real-time applications. However most of the top performing approaches provide state of art results at high computational costs. In this work we propose a fast solution for achieving state of art results for both pedestrian detection and semantic segmentation. As baseline for pedestrian detection we use sliding windows over cost efficient multiresolution filtered LUV+HOG channels. We use the same channels for classifying pixels into eight semantic classes. Using short range and long range multiresolution channel features we achieve more robust segmentation results compared to traditional codebook based approaches at much lower computational costs. The resulting segmentations are used as additional semantic channels in order to achieve a more powerful pedestrian detector. To also achieve fast pedestrian detection we employ a multiscale detection scheme based on a single flexible pedestrian model and a single image scale. The proposed solution provides competitive results on both pedestrian detection and semantic segmentation benchmarks at 8 FPS on CPU and at 15 FPS on GPU, being the fastest top performing approach.",
"title": ""
},
{
"docid": "198b084248ea03fb1398df036db800bf",
"text": "Assistive technology (AT) is defined in this paper as ‘any device or system that allows an individual to perform a task that they would otherwise be unable to do, or increases the ease and safety with which the task can be performed’ (Cowan and Turner-Smith 1999). Its importance in contributing to older people’s independence and autonomy is increasingly recognised, but there has been little research into the viability of extensive installations of AT. This paper focuses on the acceptability of AT to older people, and reports one component of a multidisciplinary research project that examined the feasibility, acceptability, costs and outcomes of introducing AT into their homes. Sixty-seven people aged 70 or more years were interviewed in-depth during 2001 to find out about their use and experience of a wide range of assistive technologies. The findings suggest a complex model of acceptability, in which a ‘ felt need’ for assistance combines with ‘product quality ’. The paper concludes by considering the tensions that may arise in the delivery of acceptable assistive technology.",
"title": ""
},
{
"docid": "e029a189f85f9cb47a5ad0a766efad1d",
"text": "\"Next generation\" data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are broadly impacting many scientific fields, including genomics, astronomy, and neuroscience. We can attack the problem caused by exponential data growth by applying horizontally scalable techniques from current analytics systems to accelerate scientific processing pipelines.\n In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28x speedup over current genomics pipelines, while reducing cost by 63%. From building this system, we were able to distill a set of techniques for implementing scientific analyses efficiently using commodity \"big data\" systems. To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system which achieves a 2.8--8.9x improvement over the state-of-the-art MPI-based system.",
"title": ""
},
{
"docid": "737dfbd7637337c294ee70c05c62acb1",
"text": "T he Pirogoff amputation, removal of the forefoot and talus followed by calcaneotibial arthrodesis, produces a lower extremity with a minimum loss of length that is capable of bearing full weight. Although the technique itself is not new, patients who have already undergone amputation of the contralateral leg may benefit particularly from this littleused amputation. Painless weight-bearing is essential for the patient who needs to retain the ability to make indoor transfers independently of helpers or a prosthesis. As the number of patients with peripheral vascular disease continues to increase, this amputation should be in the armamentarium of the treating orthopaedic surgeon. Our primary indication for a Pirogoff amputation is a forefoot lesion that is too extensive for reconstruction or nonoperative treatment because of gangrene or infection, as occurs in patients with diabetes or arteriosclerosis. Other causes, such as trauma, malignancy, osteomyelitis, congenital abnormalities, and rare cases of frostbite, are also considered. To enhance the success rate, we only perform surgery if four criteria are met: (1) the blood supply to the soft tissues and the calcaneal region should support healing, (2) there should be no osteomyelitis of the distal part of the tibia or the calcaneus, (3) the heel pad should be clinically viable and painless, and (4) the patient should be able to walk with two prostheses after rehabilitation. Warren mentioned uncontrolled diabetes mellitus, severe Charcot arthropathy of the foot, and smoking as relative contraindications. There are other amputation options. In developed countries, the most common indication for transtibial amputation is arteriosclerosis (>90%). Although the results of revascularization operations and interventional radiology are promising, amputation remains the only option for 40% of all patients with severe ischemia. Various types of amputation of the lower extremity have been described. The advantages and disadvantages have to be considered and discussed with the patient. For the Syme ankle disarticulation, amputation is performed at the level of the talocrural joint and the plantar fat pad is dissected from the calcaneus and is preserved. Woundhealing and proprioception are good, but patients have an inconvenient leg-length discrepancy and in some cases the heel is not pain-free on weight-bearing. Prosthetic fitting can be difficult because of a bulbous distal end or shift of the plantar fat pad. However, the latter complication can be prevented in most cases by anchoring the heel pad to the distal aspect of",
"title": ""
}
] |
scidocsrr
|
ec9d1ea5b46ac338f26de530bc117b04
|
Towards the Internet of Smart Trains: A Review on Industrial IoT-Connected Railways
|
[
{
"docid": "d529d1052fce64ae05fbc64d2b0450ab",
"text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7e647cac9417bf70acd8c0b4ee0faa9b",
"text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.",
"title": ""
}
] |
[
{
"docid": "ba67c3006c6167550bce500a144e63f1",
"text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.",
"title": ""
},
{
"docid": "14508a81494077406b90632d38e09d44",
"text": "During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations.",
"title": ""
},
{
"docid": "8738ec0c6e265f0248d7fa65de4cdd05",
"text": "BACKGROUND\nCaring traditionally has been at the center of nursing. Effectively measuring the process of nurse caring is vital in nursing research. A short, less burdensome dimensional instrument for patients' use is needed for this purpose.\n\n\nOBJECTIVES\nTo derive and validate a shorter Caring Behaviors Inventory (CBI) within the context of the 42-item CBI.\n\n\nMETHODS\nThe responses to the 42-item CBI from 362 hospitalized patients were used to develop a short form using factor analysis. A test-retest reliability study was conducted by administering the shortened CBI to new samples of patients (n = 64) and nurses (n = 42).\n\n\nRESULTS\nFactor analysis yielded a 24-item short form (CBI-24) that (a) covers the four major dimensions assessed by the 42-item CBI, (b) has internal consistency (alpha =.96) and convergent validity (r =.62) similar to the 42-item CBI, (c) reproduces at least 97% of the variance of the 42 items in patients and nurses, (d) provides statistical conclusions similar to the 42-item CBI on scoring for caring behaviors by patients and nurses, (e) has similar sensitivity in detecting between-patient difference in perceptions, (f) obtains good test-retest reliability (r = .88 for patients and r=.82 for nurses), and (g) confirms high internal consistency (alpha >.95) as a stand-alone instrument administered to the new samples.\n\n\nCONCLUSION\nCBI-24 appears to be equivalent to the 42-item CBI in psychometric properties, validity, reliability, and scoring for caring behaviors among patients and nurses. These results recommend the use of CBI-24 to reduce response burden and research costs.",
"title": ""
},
{
"docid": "4253afeaeb2f238339611e5737ed3e06",
"text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.",
"title": ""
},
{
"docid": "c6054c39b9b36b5d446ff8da3716ec30",
"text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "36cd997369a654567f2304070b22638c",
"text": "There has been a recent increase in the prevalence of asthma worldwide; however, the 5-10% of patients with severe disease account for a substantial proportion of the health costs. Although most asthma cases can be satisfactorily managed with a combination of anti-inflammatory drugs and bronchodilators, patients who remain symptomatic despite maximum combination treatment represent a heterogeneous group consisting of those who are under-treated or non-adherent with their prescribed medication. After excluding under-treatment and poor compliance, corticosteroid refractory asthma can be identified as a subphenotype characterised by a heightened neutrophilic airway inflammatory response in the presence or absence of eosinophils, with evidence of increased tissue injury and remodelling. Although a wide range of environmental factors such as allergens, smoking, air pollution, infection, hormones, and specific drugs can contribute to this phenotype, other features associated with changes in the airway inflammatory response should be taken into account. Aberrant communication between an injured airway epithelium and underlying mesenchyme contributes to disease chronicity and refractoriness to corticosteroids. The importance of identifying underlying causative factors and the recent introduction of novel therapeutic approaches, including the targeting of immunoglobulin E and tumour necrosis factor alpha with biological agents, emphasise the need for careful phenotyping of patients with severe disease to target improved management of the individual patient's needs.",
"title": ""
},
{
"docid": "49c4137c763c2f9bb48b2b95ace9623a",
"text": "Multi-relational data, like knowledge graphs, are generated from multiple data sources by extracting entities and their relationships. We often want to include inferred, implicit or likely relationships that are not explicitly stated, which can be viewed as link-prediction in a graph. Tensor decomposition models have been shown to produce state-of-the-art results in link-prediction tasks. We describe a simple but novel extension to an existing tensor decomposition model to predict missing links using similarity among tensor slices, as opposed to an existing tensor decomposition models which assumes each slice to contribute equally in predicting links. Our extended model performs better than the original tensor decomposition and the non-negative tensor decomposition variant of it in an evaluation on several datasets.",
"title": ""
},
{
"docid": "2f20f587bb46f7133900fd8c22cea3ab",
"text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.",
"title": ""
},
{
"docid": "f12749ba8911e8577fbde2327c9dc150",
"text": "Regardless of successful applications of the convolutional neural networks (CNNs) in different fields, its application to seismic waveform classification and first-break (FB) picking has not been explored yet. This letter investigates the application of CNNs for classifying time-space waveforms from seismic shot gathers and picking FBs of both direct wave and refracted wave. We use representative subimage samples with two types of labeled waveform classification to supervise CNNs training. The goal is to obtain the optimal weights and biases in CNNs, which are solved by minimizing the error between predicted and target label classification. The trained CNNs can be utilized to automatically extract a set of time-space attributes or features from any subimage in shot gathers. These attributes are subsequently inputted to the trained fully connected layer of CNNs to output two values between 0 and 1. Based on the two-element outputs, a discriminant score function is defined to provide a single indication for classifying input waveforms. The FB is then located from the calculated score maps by sequentially using a threshold, the first local minimum rule of every trace and a median filter. Finally, we adopt synthetic and real shot data examples to demonstrate the effectiveness of CNNs-based waveform classification and FB picking. The results illustrate that CNN is an efficient automatic data-driven classifier and picker.",
"title": ""
},
{
"docid": "1ee679d237c54dd8aaaeb2383d6b49fa",
"text": "Bike sharing systems (BSSs) have become common in many cities worldwide, providing a new transportation mode for residents' commutes. However, the management of these systems gives rise to many problems. As the bike pick-up demands at different places are unbalanced at times, the systems have to be rebalanced frequently. Rebalancing the bike availability effectively, however, is very challenging as it demands accurate prediction for inventory target level determination. In this work, we propose two types of regression models using multi-source data to predict the hourly bike pick-up demand at cluster level: Similarity Weighted K-Nearest-Neighbor (SWK) based regression and Artificial Neural Network (ANN). SWK-based regression models learn the weights of several meteorological factors and/or taxi usage and use the correlation between consecutive time slots to predict the bike pick-up demand. The ANN is trained by using historical trip records of BSS, meteorological data, and taxi trip records. Our proposed methods are tested with real data from a New York City BSS: Citi Bike NYC. Performance comparison between SWK-based and ANN-based methods is provided. Experimental results indicate the high accuracy of ANN-based prediction for bike pick-up demand using multisource data.",
"title": ""
},
{
"docid": "eed8fd39830e8058d55427623bb655df",
"text": "In this paper, we present a solution for main content identification in web pages. Our solution is language-independent; Web pages may be written in different languages. It is topic-independent; no domain knowledge or dictionary is applied. And it is unsupervised; no training phase is necessary. The solution exploits the tree structure of web pages and the frequencies of text tokens to attribute scores of content density to the areas of the page and by the way identify the most important one. We tested this solution over representative examples of web pages to show how efficient and accurate it is. The results were satisfying.",
"title": ""
},
{
"docid": "11f47bb575a6e50c3d3ccef0e75ff3b9",
"text": "Corporate social responsibility is incorporated into strategic management at the enterprise strategy level. This paper delineates the domain of enterprise strategy by focusing on how well a firm's social performance matches its competences and stakeholders rather than on the \"quantity\" of a firm's social responsibility. Enterprise strategy is defined and a classification of enterprise strategies is set forth.",
"title": ""
},
{
"docid": "197797b3bb51791a5986d0ee0ea04d2b",
"text": "Energy harvesting for wireless communication networks is a new paradigm that allows terminals to recharge their batteries from external energy sources in the surrounding environment. A promising energy harvesting technology is wireless power transfer where terminals harvest energy from electromagnetic radiation. Thereby, the energy may be harvested opportunistically from ambient electromagnetic sources or from sources that intentionally transmit electromagnetic energy for energy harvesting purposes. A particularly interesting and challenging scenario arises when sources perform simultaneous wireless information and power transfer (SWIPT), as strong signals not only increase power transfer but also interference. This article provides an overview of SWIPT systems with a particular focus on the hardware realization of rectenna circuits and practical techniques that achieve SWIPT in the domains of time, power, antennas, and space. The article also discusses the benefits of a potential integration of SWIPT technologies in modern communication networks in the context of resource allocation and cooperative cognitive radio networks.",
"title": ""
},
{
"docid": "ad2d21232d8a9af42ea7339574739eb3",
"text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"title": ""
},
{
"docid": "7e6e2d5fabb642fbb089c3e0c2f04921",
"text": "Computer vision is one of the most active research fields in information technology today. Giving machines and robots the ability to see and comprehend the surrounding world at the speed of sight creates endless potential applications and opportunities. Feature detection and description algorithms can be indeed considered as the retina of the eyes of such machines and robots. However, these algorithms are typically computationally intensive, which prevents them from achieving the speed of sight real-time performance. In addition, they differ in their capabilities and some may favor and work better given a specific type of input compared to others. As such, it is essential to compactly report their pros and cons as well as their performances and recent advances. This paper is dedicated to provide a comprehensive overview on the state-of-the-art and recent advances in feature detection and description algorithms. Specifically, it starts by overviewing fundamental concepts. It then compares, reports and discusses their performance and capabilities. The Maximally Stable Extremal Regions algorithm and the Scale Invariant Feature Transform algorithms, being two of the best of their type, are selected to report their recent algorithmic derivatives.",
"title": ""
},
{
"docid": "9586a8e41ca84dbb71c3764c88753efb",
"text": "Indoor wireless systems often operate under non-line-of-sight (NLOS) conditions that can cause ranging errors for location-based applications. As such, these applications could benefit greatly from NLOS identification and mitigation techniques. These techniques have been primarily investigated for ultra-wide band (UWB) systems, but little attention has been paid to WiFi systems, which are far more prevalent in practice. In this study, we address the NLOS identification and mitigation problems using multiple received signal strength (RSS) measurements from WiFi signals. Key to our approach is exploiting several statistical features of the RSS time series, which are shown to be particularly effective. We develop and compare two algorithms based on machine learning and a third based on hypothesis testing to separate LOS/NLOS measurements. Extensive experiments in various indoor environments show that our techniques can distinguish between LOS/NLOS conditions with an accuracy of around 95%. Furthermore, the presented techniques improve distance estimation accuracy by 60% as compared to state-of-the-art NLOS mitigation techniques. Finally, improvements in distance estimation accuracy of 50% are achieved even without environment-specific training data, demonstrating the practicality of our approach to real world implementations.",
"title": ""
},
{
"docid": "e8edd727e923595acc80df364bfc64af",
"text": "Context: Architecture-centric software evolution (ACSE) enables changes in system’s structure and behaviour while maintaining a global view of the software to address evolution-centric trade-offs. The existing research and practices for ACSE primarily focus on design-time evolution and runtime adaptations to accommodate changing requirements in existing architectures. Objectives: We aim to identify, taxonomically classify and systematically compare the existing research focused on enabling or enhancing change reuse to support ACSE. Method: We conducted a systematic literature review of 32 qualitatively selected studies and taxonomically classified these studies based on solutions that enable (i) empirical acquisition and (ii) systematic application of architecture evolution reuse knowledge (AERK) to guide ACSE. Results: We identified six distinct research themes that support acquisition and application of AERK. We investigated (i) how evolution reuse knowledge is defined, classified and represented in the existing research to support ACSE and (ii) what are the existing methods, techniques and solutions to support empirical acquisition and systematic application of AERK. Conclusions: Change patterns (34% of selected studies) represent a predominant solution, followed by evolution styles (25%) and adaptation strategies and policies (22%) to enable application of reuse knowledge. Empirical methods for acquisition of reuse knowledge represent 19% including pattern discovery, configuration analysis, evolution and maintenance prediction techniques (approximately 6% each). A lack of focus on empirical acquisition of reuse knowledge suggests the need of solutions with architecture change mining as a complementary and integrated phase for architecture change execution. Copyright © 2014 John Wiley & Sons, Ltd. Received 13 May 2013; Revised 23 September 2013; Accepted 27 December 2013",
"title": ""
},
{
"docid": "f001f2933b3c96fe6954e086488776e0",
"text": "Pd coated copper (PCC) wire and Au-Pd coated copper (APC) wire have been widely used in the field of LSI device. Recently, higher bond reliability at high temperature becomes increasingly important for on-vehicle devices. However, it has been reported that conventional PCC wire caused a bond failure at elevated temperatures. On the other hand, new-APC wire had higher reliability at higher temperature than conventional APC wire. New-APC wire has higher concentration of added element than conventional APC wire. In this paper, failure mechanism of conventional APC wire and improved mechanism of new-APC wire at high temperature were shown. New-APC wire is suitable for onvehicle devices.",
"title": ""
},
{
"docid": "b18c8b7472ba03a260d63b886a6dc11d",
"text": "In this paper, we propose a novel technique for automatic table detection in document images. Lines and tables are among the most frequent graphic, non-textual entities in documents and their detection is directly related to the OCR performance as well as to the document layout description. We propose a workflow for table detection that comprises three distinct steps: (i) image pre-processing; (ii) horizontal and vertical line detection and (iii) table detection. The efficiency of the proposed method is demonstrated by using a performance evaluation scheme which considers a great variety of documents such as forms, newspapers/magazines, scientific journals, tickets/bank cheques, certificates and handwritten documents.",
"title": ""
},
{
"docid": "8b0ac11c05601e93557fe0d5097b4529",
"text": "We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage - the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be \"target earners,\" contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.",
"title": ""
}
] |
scidocsrr
|
cf2b60103a2e3a4e17ed673d0d1448fb
|
Soft Material Characterization for Robotic Applications
|
[
{
"docid": "0b6a766d3e23cd15ba748961a00a569b",
"text": "A novel soft strain sensor capable of withstanding strains of up to 100% is described. The sensor is made of a hyperelastic silicone elastomer that contains embedded microchannels filled with conductive liquids. This is an effort of improving the previously reported soft sensors that uses a single liquid conductor. The proposed sensor employs a hybrid approach involving two liquid conductors: an ionic solution and an eutectic gallium-indium alloy. This hybrid method reduces the sensitivity to noise that may be caused by variations in electrical resistance of the wire interface and undesired stress applied to signal routing areas. The bridge between these two liquids is made conductive by doping the elastomer locally with nickel nanoparticles. The design, fabrication, and characterization of the sensor are presented.",
"title": ""
}
] |
[
{
"docid": "9113e4ba998ec12dd2536073baf40610",
"text": "Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we have proposed a general adaptation scheme for DNN based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn connection weights from training data as well as the corresponding adaptation methods to learn new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either frame-level cross-entropy or sequence-level maximum mutual information training criterion. We have proposed three different ways to apply this adaptation scheme based on the so-called speaker codes: i) Nonlinear feature normalization in feature space; ii) Direct model adaptation of DNN based on speaker codes; iii) Joint speaker adaptive training with speaker codes. We have evaluated the proposed adaptation methods in two standard speech recognition tasks, namely TIMIT phone recognition and large vocabulary speech recognition in the Switchboard task. Experimental results have shown that all three methods are quite effective to adapt large DNN models using only a small amount of adaptation data. For example, the Switchboard results have shown that the proposed speaker-code-based adaptation methods may achieve up to 8-10% relative error reduction using only a few dozens of adaptation utterances per speaker. Finally, we have achieved very good performance in Switchboard (12.1% in WER) after speaker adaptation using sequence training criterion, which is very close to the best performance reported in this task (\"Deep convolutional neural networks for LVCSR,\" T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).",
"title": ""
},
{
"docid": "29f17b7d7239a2845d513976e4981d6a",
"text": "Agriculture is the backbone of the Indian economy. As all know that demand of agricultural products are increasing day by day as the population is ever increasing, so there is a need to minimize labor, limit the use of water and increase the production of crops. So there is a need to switch from traditional agriculture to the modern agriculture. The introduction of internet of things into agriculture modernization will help solve these problems. This paper presents the IOT based agriculture production system which will monitor or analyze the crop environment like temperature humidity and moisture content in soil. This paper uses the integration of RFID technology and sensors. As both have different objective sensors are for sensing and RIFD technology is for identification This will effectively solve the problem of farmer, increase the yield and saves his time, power, money.",
"title": ""
},
{
"docid": "602c176fc4150543f443f0891161b1bb",
"text": "In the wake of a polarizing election, the cyber world is laden with hate speech. Context accompanying a hate speech text is useful for identifying hate speech, which however has been largely overlooked in existing datasets and hate speech detection models. In this paper, we provide an annotated corpus of hate speech with context information well kept. Then we propose two types of hate speech detection models that incorporate context information, a logistic regression model with context features and a neural network model with learning components for context. Our evaluation shows that both models outperform a strong baseline by around 3% to 4% in F1 score and combining these two models further improve the performance by another 7% in F1 score.",
"title": ""
},
{
"docid": "c3d06acdf8b74535fa22ed08420d5433",
"text": "Generative adversarial networks have been shown to generate very realistic images by learning through a min-max game. Furthermore, these models are known to model image spaces more easily when conditioned on class labels. In this work, we consider conditioning on fine-grained textual descriptions, thus also enabling us to produce realistic images that correspond to the input text description. Additionally, we consider the task of learning disentangled representations for images through special latent codes, such that we can move them as knobs to alter the generated image. These latent codes take on very interpretable roles and are learnt in a completely unsupervised manner, using ideas from InfoGAN. We show that the learnt latent codes that encode much more variance and semantic interpretability as compared to standard GANs by experimenting on two datasets.",
"title": ""
},
{
"docid": "d8f7b138124e7b1a251e8bd92e47f35c",
"text": "Autonomous delivery of goods using a Micro Air Vehicle (MAV) is a difficult problem, as it poses high demand on the MAV's control, perception and manipulation capabilities. This problem is especially challenging if the exact shape, location and configuration of the objects are unknown. In this paper, we report our findings during the development and evaluation of a fully integrated system that is energy efficient and enables MAVs to pick up and deliver objects with partly ferrous surface of varying shapes and weights. This is achieved by using a novel combination of an electro-permanent magnetic gripper with a passively compliant structure and integration with detection, control and servo positioning algorithms. The system's ability to grasp stationary and moving objects was tested, as well as its ability to cope with different shapes of the object and external disturbances. We show that such a system can be successfully deployed in scenarios where an object with partly ferrous parts needs to be gripped and placed in a predetermined location.",
"title": ""
},
{
"docid": "110742230132649f178d2fa99c8ffade",
"text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.",
"title": ""
},
{
"docid": "4aca364133eb0630c3b97e69922d07b7",
"text": "Deep learning offers new tools to improve our understanding of many important scientific problems. Neutrinos are the most abundant particles in existence and are hypothesized to explain the matter-antimatter asymmetry that dominates our universe. Definitive tests of this conjecture require a detailed understanding of neutrino interactions with a variety of nuclei. Many measurements of interest depend on vertex reconstruction — finding the origin of a neutrino interaction using data from the detector, which can be represented as images. Traditionally, this has been accomplished by utilizing methods that identify the tracks coming from the interaction. However, these methods are not ideal for interactions where an abundance of tracks and cascades occlude the vertex region. Manual algorithm engineering to handle these challenges is complicated and error prone. Deep learning extracts rich, semantic features directly from raw data, making it a promising solution to this problem. In this work, deep learning models are presented that classify the vertex location in regions meaningful to the domain scientists improving their ability to explore more complex interactions.",
"title": ""
},
{
"docid": "fc4f06fb586de6452337d83bac8f64f3",
"text": "Deep learning techniques have boosted the performance of hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have shown superior performance to that of the conventional machine learning algorithms. Recently, a novel type of neural networks called capsule networks (CapsNets) was presented to improve the most advanced CNNs. In this paper, we present a modified two-layer CapsNet with limited training samples for HSI classification, which is inspired by the comparability and simplicity of the shallower deep learning models. The presented CapsNet is trained using two real HSI datasets, i.e., the PaviaU (PU) and SalinasA datasets, representing complex and simple datasets, respectively, and which are used to investigate the robustness or representation of every model or classifier. In addition, a comparable paradigm of network architecture design has been proposed for the comparison of CNN and CapsNet. Experiments demonstrate that CapsNet shows better accuracy and convergence behavior for the complex data than the state-of-the-art CNN. For CapsNet using the PU dataset, the Kappa coefficient, overall accuracy, and average accuracy are 0.9456, 95.90%, and 96.27%, respectively, compared to the corresponding values yielded by CNN of 0.9345, 95.11%, and 95.63%. Moreover, we observed that CapsNet has much higher confidence for the predicted probabilities. Subsequently, this finding was analyzed and discussed with probability maps and uncertainty analysis. In terms of the existing literature, CapsNet provides promising results and explicit merits in comparison with CNN and two baseline classifiers, i.e., random forests (RFs) and support vector machines (SVMs).",
"title": ""
},
{
"docid": "4aecf3efd5de0ab468fc1f47d7662357",
"text": "AIM\nThis article presents a discussion of generational differences and their impact on the nursing workforce and how this impact affects the work environment.\n\n\nBACKGROUND\nThe global nursing workforce represents four generations of nurses. This generational diversity frames attitudes, beliefs, work habits and expectations associated with the role of the nurse in the provision of care and in the way the nurse manages their day-to-day activities.\n\n\nDATA SOURCES\nAn electronic search of MEDLINE, PubMed and Cinahl databases was performed using the words generational diversity, nurse managers and workforce. The search was limited to 2000-2012.\n\n\nDISCUSSION\nGenerational differences present challenges to contemporary nurse managers working in a healthcare environment which is complex and dynamic, in terms of managing nurses who think and behave in a different way because of disparate core personal and generational values, namely, the three Cs of communication, commitment and compensation.\n\n\nIMPLICATIONS FOR NURSING\nAn acceptance of generational diversity in the workplace allows a richer scope for practice as the experiences and knowledge of each generation in the nursing environment creates an environment of acceptance and harmony facilitating retention of nurses.\n\n\nCONCLUSION\nAcknowledgement of generational characteristics provides the nurse manager with strategies which focus on mentoring and motivation; communication, the increased use of technology and the ethics of nursing, to bridge the gap between generations of nurses and to increase nursing workforce cohesion.",
"title": ""
},
{
"docid": "bfe58868ab05a6ba607ef1f288d37f33",
"text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.",
"title": ""
},
{
"docid": "58047bd197ebeb760156cc33462c1335",
"text": "We present a nonlinear, dynamic controller for a 6DOF quadrotor operating in an estimated, spatially varying, turbulent wind field. The quadrotor dynamics include the aerodynamic effects of drag, rotor blade flapping, and induced thrust due to translational velocity and external wind fields. To control the quadrotor we use a dynamic input/output feedback linearization controller that estimates a parametric model of the wind field using a recursive Bayesian filter. Each rotor experiences a possibly different wind field, which introduces moments that are accounted for in the controller and allows flight in wind fields that vary over the length of the vehicle. We add noise to the wind field in the form of Dryden turbulence to simulate the algorithm in two applications: autonomous ship landing and quadrotor proximity flight.",
"title": ""
},
{
"docid": "405acd07ad0d1b3b82ada19e85e23ce6",
"text": "Self-driving technology is advancing rapidly — albeit with significant challenges and limitations. This progress is largely due to recent developments in deep learning algorithms. To date, however, there has been no systematic comparison of how different deep learning architectures perform at such tasks, or an attempt to determine a correlation between classification performance and performance in an actual vehicle, a potentially critical factor in developing self-driving systems. Here, we introduce the first controlled comparison of multiple deep-learning architectures in an end-to-end autonomous driving task across multiple testing conditions. We used a simple and affordable platform consisting of an off-the-shelf, remotely operated vehicle, a GPU-equipped computer, and an indoor foamrubber racetrack. We compared performance, under identical driving conditions, across seven architectures including a fully-connected network, a simple 2 layer CNN, AlexNet, VGG-16, Inception-V3, ResNet, and an LSTM by assessing the number of laps each model was able to successfully complete without crashing while traversing an indoor racetrack. We compared performance across models when the conditions exactly matched those in training as well as when the local environment and track were configured differently and objects that were not included in the training dataset were placed on the track in various positions. In addition, we considered performance using several different data types for training and testing including single grayscale and color frames, and multiple grayscale frames stacked together in sequence. With the exception of a fully-connected network, all models performed reasonably well (around or above 80%) and most very well (∼95%) on at least one input type but with considerable variation across models and inputs. Overall, AlexNet, operating on single color frames as input, achieved the best level of performance (100% success rate in phase one and 55% in phase two) while VGG-16 performed well most consistently across image types. Performance with obstacles on the track and conditions that were different than those in training was much more variable than without objects and under conditions similar to those in the training set. Analysis of the model’s driving paths found greater consistency within vs. between models. Path similarity between models did not correlate strongly with success similarity. Our novel pixelflipping method allowed us to create a heatmap for each given image to observe what features of the image were weighted most heavily by the network when making its decision. Finally, we found that the variability across models in the driving task was not fully predicted by validation performance, indicating the presence of a ‘deployment gap’ between model training and performance in a simple, real-world task. Overall, these results demonstrate the need for increased field research in self-driving. 1Center for Complex Systems and Brain Sciences, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA 2College of Computer and Information Science, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA 3Department of Ocean and Mechanical Engineering, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA † mteti@fau.edu",
"title": ""
},
{
"docid": "3ddc5aa431464a0dc5e1eb24ed048789",
"text": "The idea that at least some aspects of word meaning can be induced from patterns of word co-occurrence is becoming increasingly popular. However, there is less agreement about the precise computations involved, and the appropriate tests to distinguish between the various possibilities. It is important that the effect of the relevant design choices and parameter values are understood if psychological models using these methods are to be reliably evaluated and compared. In this article, we present a systematic exploration of the principal computational possibilities for formulating and validating representations of word meanings from word co-occurrence statistics. We find that, once we have identified the best procedures, a very simple approach is surprisingly successful and robust over a range of psychologically relevant evaluation measures.",
"title": ""
},
{
"docid": "e11b6fd2dcec42e7b726363a869a0d95",
"text": "Future frame prediction in videos is a promising avenue for unsupervised video representation learning. Video frames are naturally generated by the inherent pixel flows from preceding frames based on the appearance and motion dynamics in the video. However, existing methods focus on directly hallucinating pixel values, resulting in blurry predictions. In this paper, we develop a dual motion Generative Adversarial Net (GAN) architecture, which learns to explicitly enforce future-frame predictions to be consistent with the pixel-wise flows in the video through a duallearning mechanism. The primal future-frame prediction and dual future-flow prediction form a closed loop, generating informative feedback signals to each other for better video prediction. To make both synthesized future frames and flows indistinguishable from reality, a dual adversarial training method is proposed to ensure that the futureflow prediction is able to help infer realistic future-frames, while the future-frame prediction in turn leads to realistic optical flows. Our dual motion GAN also handles natural motion uncertainty in different pixel locations with a new probabilistic motion encoder, which is based on variational autoencoders. Extensive experiments demonstrate that the proposed dual motion GAN significantly outperforms stateof-the-art approaches on synthesizing new video frames and predicting future flows. Our model generalizes well across diverse visual scenes and shows superiority in unsupervised video representation learning.",
"title": ""
},
{
"docid": "9cd00d9975c1efa741d1b01200a7d660",
"text": "BACKGROUND\nMany ethical problems exist in nursing homes. These include, for example, decision-making in end-of-life care, use of restraints and a lack of resources.\n\n\nAIMS\nThe aim of the present study was to investigate nursing home staffs' opinions and experiences with ethical challenges and to find out which types of ethical challenges and dilemmas occur and are being discussed in nursing homes.\n\n\nMETHODS\nThe study used a two-tiered approach, using a questionnaire on ethical challenges and systematic ethics work, given to all employees of a Norwegian nursing home including nonmedical personnel, and a registration of systematic ethics discussions from an Austrian model of good clinical practice.\n\n\nRESULTS\nNinety-one per cent of the nursing home staff described ethical problems as a burden. Ninety per cent experienced ethical problems in their daily work. The top three ethical challenges reported by the nursing home staff were as follows: lack of resources (79%), end-of-life issues (39%) and coercion (33%). To improve systematic ethics work, most employees suggested ethics education (86%) and time for ethics discussion (82%). Of 33 documented ethics meetings from Austria during a 1-year period, 29 were prospective resident ethics meetings where decisions for a resident had to be made. Agreement about a solution was reached in all 29 cases, and this consensus was put into practice in all cases. Residents did not participate in the meetings, while relatives participated in a majority of case discussions. In many cases, the main topic was end-of-life care and life-prolonging treatment.\n\n\nCONCLUSIONS\nLack of resources, end-of-life issues and coercion were ethical challenges most often reported by nursing home staff. The staff would appreciate systematic ethics work to aid decision-making. Resident ethics meetings can help to reach consensus in decision-making for nursing home patients. In the future, residents' participation should be encouraged whenever possible.",
"title": ""
},
{
"docid": "88ab27740e5c957993fd70f0bf6ac841",
"text": "We examine the problem of discrete stock price prediction using a synthesis of linguistic, financial and statistical techniques to create the Arizona Financial Text System (AZFinText). The research within this paper seeks to contribute to the AZFinText system by comparing AZFinText’s predictions against existing quantitative funds and human stock pricing experts. We approach this line of research using textual representation and statistical machine learning methods on financial news articles partitioned by similar industry and sector groupings. Through our research, we discovered that stocks partitioned by Sectors were most predictable in measures of Closeness, Mean Squared Error (MSE) score of 0.1954, predicted Directional Accuracy of 71.18% and a Simulated Trading return of 8.50% (compared to 5.62% for the S&P 500 index). In direct comparisons to existing market experts and quantitative mutual funds, our system’s trading return of 8.50% outperformed well-known trading experts. Our system also performed well against the top 10 quantitative mutual funds of 2005, where our system would have placed fifth. When comparing AZFinText against only those quantitative funds that monitor the same securities, AZFinText had a 2% higher return than the best performing quant fund.",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "4c9d9fce9f5c0d811e33470150da8b4f",
"text": "Personal health monitoring systems are emerging as promising solutions to develop ultra-small, portable devices that can continuously monitor and process several vital body parameters. In this work, we present a wearable device for physical and emotional health monitoring. The device obtains user's key physiological signals: ECG, respiration, Impedance Cardiogram (ICG), blood pressure and skin conductance and derives the user's emotion states as well. We have developed embedded algorithms that process the bio-signals in real-time to detect any abnormalities (cardiac arrhythmias and morphology changes) in the ECG and to detect key parameters (such as the Pre- Ejection Period and fluid status level) from the ICG. We present a novel method to detect continuous beat-by-beat blood pressure from the ECG and ICG signals, as well as a real-time embedded emotion classifier that computes the emotion levels of the user. Emotions are classified according to their attractiveness (positive valence) or their averseness (negative valence) in the horizontal valence dimension. The excitement level induced by the emotions is represented by high to low positions in the vertical arousal dimension of the valence-arousal space. The signals are measured either intermittently by touching the metal electrodes on the device (for point-of-care testing) or continuously, using a chest strap for long term monitoring. The processed data from device is sent to a mobile phone using a Bluetooth Low Energy protocol. Our results show that the device can monitor the signals continuously, providing accurate detection of the motion state, for over 72 hours on a single battery charge.",
"title": ""
},
{
"docid": "6b499350692f9cb6955bb43e5edcdc32",
"text": "Past research has identified factors that are important to the successful implementation of enterprise resource planning (ERP) systems. However, the identification of these factors has often been based on the perceptions of senior members within organizations that are implementing these systems. In this study, the perceptions of managers and end-users on selected implementation factors are compared. Understanding if differences exist in the perceptions of different groups within an organization and the nature of these differences can help implementers develop appropriate intervention mechanisms such as training and communication that can lead to successful ERP implementation.",
"title": ""
},
{
"docid": "6c61d7656175193200f4c7b749f15b63",
"text": "Discussions over the law and regulation of Artificial Intelligence (“AI”) and robots are all the rage as early applications are introduced in society. In computer science, concerns that “overly rigid regulations might stifle innovation”, have fueled proposals to create regimes of selective immunity for research on intelligent machines. At the same time, ethical arguments have prompted calls for an all-out ban on research in relation to lethal automated weapons (“LAWs”). And some writers claim that robots will become so important to mankind that “a new branch of the law” is needed, “to grant their race and its individual members the benefits of legal protection”, much like the international community did, or tried to, with the environment.",
"title": ""
}
] |
scidocsrr
|
6298f33bcdada26696cc11b00d2cda42
|
PhD Grants from the China Scholarship Council – Details of the PhD proposal 1 Supervision
|
[
{
"docid": "c5c64d7fcd9b4804f7533978026dcfbd",
"text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.",
"title": ""
}
] |
[
{
"docid": "cfcc9bc876bbd9ef1c45c3cbf06fa8c6",
"text": "Developing state-of-the-art approaches for specific tasks is a major driving force in our research community. Depending on the prestige of the task, publishing it can come along with a lot of visibility. The question arises how reliable are our evaluation methodologies to compare approaches? One common methodology to identify the stateof-the-art is to partition data into a train, a development and a test set. Researchers can train and tune their approach on some part of the dataset and then select the model that worked best on the development set for a final evaluation on unseen test data. Test scores from different approaches are compared, and performance differences are tested for statistical significance. In this publication, we show that there is a high risk that a statistical significance in this type of evaluation is not due to a superior learning approach. Instead, there is a high risk that the difference is due to chance. For example for the CoNLL 2003 NER dataset we observed in up to 26% of the cases type I errors (false positives) with a threshold of p < 0.05, i.e., falsely concluding a statistically significant difference between two identical approaches. We prove that this evaluation setup is unsuitable to compare learning approaches. We formalize alternative evaluation setups based on score distributions.",
"title": ""
},
{
"docid": "6b280ca761a5ed7b206ea8d487034b70",
"text": "The recent advances and the convergence of micro electro-mechanical systems technology, integrated circuit technologies, microprocessor hardware and nano technology, wireless communications, Ad-hoc networking routing protocols, distributed signal processing, and embedded systems have made the concept of Wireless Sensor Networks (WSNs). Sensor network nodes are limited with respect to energy supply, restricted computational capacity and communication bandwidth. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. To prolong the lifetime of the sensor nodes, designing efficient routing protocols is critical. Even though sensor networks are primarily designed for monitoring and reporting events, since they are application dependent, a single routing protocol cannot be efficient for sensor networks across all applications. In this paper, we analyze the design issues of sensor networks and present a classification and comparison of routing protocols. This comparison reveals the important features that need to be taken into consideration while designing and evaluating new routing protocols for sensor networks.",
"title": ""
},
{
"docid": "586f93e20e9d66029e9511780249544a",
"text": "Our goal is to synthesize controllers for robots that provably generalize well to novel environments given a dataset of example environments. The key technical idea behind our approach is to leverage tools from generalization theory in machine learning by exploiting a precise analogy (which we present in the form of a reduction) between robustness of controllers to novel environments and generalization of hypotheses in supervised learning. In particular, we utilize the Probably Approximately Correct (PAC)-Bayes framework, which allows us to obtain upper bounds (that hold with high probability) on the expected cost of (stochastic) controllers across novel environments. We propose control synthesis algorithms that explicitly seek to minimize this upper bound. The corresponding optimization problem can be solved efficiently using convex optimization (Relative Entropy Programming in particular) in the setting where we are optimizing over a finite control policy space. In the more general setting of continuously parameterized controllers, we minimize this upper bound using stochastic gradient descent. We present examples of our approach in the context of obstacle avoidance control with depth measurements. Our simulated examples demonstrate the potential of our approach to provide strong generalization guarantees on controllers for robotic systems with continuous state and action spaces, nonlinear dynamics, and partially observable state via sensor measurements.",
"title": ""
},
{
"docid": "5454fbb1a924f3360a338c11a88bea89",
"text": "PURPOSE OF REVIEW\nThis review describes the most common motor neuron disease, ALS. It discusses the diagnosis and evaluation of ALS and the current understanding of its pathophysiology, including new genetic underpinnings of the disease. This article also covers other motor neuron diseases, reviews how to distinguish them from ALS, and discusses their pathophysiology.\n\n\nRECENT FINDINGS\nIn this article, the spectrum of cognitive involvement in ALS, new concepts about protein synthesis pathology in the etiology of ALS, and new genetic associations will be covered. This concept has changed over the past 3 to 4 years with the discovery of new genes and genetic processes that may trigger the disease. As of 2014, two-thirds of familial ALS and 10% of sporadic ALS can be explained by genetics. TAR DNA binding protein 43 kDa (TDP-43), for instance, has been shown to cause frontotemporal dementia as well as some cases of familial ALS, and is associated with frontotemporal dysfunction in ALS.\n\n\nSUMMARY\nThe anterior horn cells control all voluntary movement: motor activity, respiratory, speech, and swallowing functions are dependent upon signals from the anterior horn cells. Diseases that damage the anterior horn cells, therefore, have a profound impact. Symptoms of anterior horn cell loss (weakness, falling, choking) lead patients to seek medical attention. Neurologists are the most likely practitioners to recognize and diagnose damage or loss of anterior horn cells. ALS, the prototypical motor neuron disease, demonstrates the impact of this class of disorders. ALS and other motor neuron diseases can represent diagnostic challenges. Neurologists are often called upon to serve as a \"medical home\" for these patients: coordinating care, arranging for durable medical equipment, and leading discussions about end-of-life care with patients and caregivers. It is important for neurologists to be able to identify motor neuron diseases and to evaluate and treat patients affected by them.",
"title": ""
},
{
"docid": "9d979b8cf09dd54b28e314e2846f02a6",
"text": "Purpose – The objective of this paper is to analyse whether individuals’ socioeconomic characteristics – age, gender and income – influence their online shopping behaviour. The individuals analysed are experienced e-shoppers i.e. individuals who often make purchases on the internet. Design/methodology/approach – The technology acceptance model was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behaviour of e-shoppers are based on their own experiences. The information obtained has been tested using causal and multi-sample analyses. Findings – The results show that socioeconomic variables moderate neither the influence of previous use of the internet nor the perceptions of e-commerce; in short, they do not condition the behaviour of the experienced e-shopper. Practical implications – The results obtained help to determine that once individuals attain the status of experienced e-shoppers their behaviour is similar, independently of their socioeconomic characteristics. The internet has become a marketplace suitable for all ages and incomes and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. Originality/value – Previous research related to the socioeconomic variables affecting e-commerce has been aimed at forecasting who is likely to make an initial online purchase. In contrast to the majority of existing studies, it is considered that the current development of the online environment should lead to analysis of a new kind of e-shopper (experienced purchaser), whose behaviour differs from that studied at the outset of this research field. The experience acquired with online shopping nullifies the importance of socioeconomic characteristics.",
"title": ""
},
{
"docid": "d4858f410b4cd045b013c056b83576ea",
"text": "We previously proposed a new bioinstrumentation using the shape deformation of the amputated upper limbs without using the myoelectricity generated on the skin of the upper limbs. However many electronic parts were required owing to a bridge circuit and multi-amplifier circuits so as to amplify a tiny voltage of strain gages. Moreover, the surplus heat might occur by the overcurrent owing to low resistance value of strain gages. Therefore, in this study, we apply a flex sensor to this system instead of strain gages to solve the above problems.",
"title": ""
},
{
"docid": "7808250942708e5458133d4295beeddf",
"text": "Board level solder joint reliability performance during drop test is a critical concern to semiconductor and electronic product manufacturers. A new JEDEC standard for board level drop test of handheld electronic products was just released to specify the drop test procedure and conditions. However, there is no detailed information stated on dynamic responses of printed circuit board (PCB) and solder joints which are closely related to stress and strain of solder joints that affect the solder joint reliability, nor there is any simulation technique which provides good correlation with experimental measurements of dynamic responses of PCB and the resulting solder joint reliability during the entire drop impact process. In this paper, comprehensive dynamic responses of PCB and solder joints, e.g., acceleration, strains, and resistance, are measured and analyzed with a multichannel real-time electrical monitoring system, and simulated with a novel input acceleration (Input-G) method. The solder joint failure process, i.e., crack initiation, propagation, and opening, is well understood from the behavior of dynamic resistance. It is found experimentally and numerically that the mechanical shock causes multiple PCB bending or vibration which induces the solder joint fatigue failure. It is proven that the peeling stress of the critical solder joint is the dominant failure indicator by simulation, which correlates well with the observations and assumptions by experiment. Coincidence of cyclic change among dynamic resistance of solder joints, dynamic strains of PCB, and the peeling stress of the critical solder joints indicates that the solder joint crack opens and closes when the PCB bends down and up, and the critical solder joint failure is induced by cyclic peeling stress. The failure mode and location of critical solder balls predicted by modeling correlate well with experimental observation by cross section and dye penetration tests",
"title": ""
},
{
"docid": "b401c0a7209d98aea517cf0e28101689",
"text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"title": ""
},
{
"docid": "a2a7fa6e46fae0538a65c95dc533fe7e",
"text": "Many Internet Service Providers (ISPs), anti-virus companies, and enterprise email vendors use Domain Name System-based Blackhole Lists (DNSBLs) to keep track of IP addresses that originate spam, so that future emails sent from these IP addresses can be rejected out-of-hand. DNSBL operators populate blocking lists based on complaints from recipients of spam, who report the IP address of the relay from which the unwanted email was sent. To be effective in blocking spam, information in the blacklist must have the following properties:",
"title": ""
},
{
"docid": "2fe2f83fa9a0dca9f01fd9e5e80ca515",
"text": "For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.",
"title": ""
},
{
"docid": "1a962bcbd5b670e532d841a74c2fe724",
"text": "In SCADA systems, there are many RTUs (Remote Terminal Units) are used for field data collection as well as sending data to master node through the communication system. In such case master node represents the collected data and enables manager to handle the remote controlling activities. The RTU is nothing but the unit of data acquisition in standalone manner. The processor used in RTU is vulnerable to random faults due to harsh environment around RTUs. Faults may lead to the failure of RTU unit and hence it becomes inaccessible for information acquisition. For long running methods, fault tolerance is major concern and research problem since from last two decades. Using the SCADA systems increase the problem of fault tolerance is becoming servered. To handle the faults in oreder to perform the message passing through all the layers of communication system fo the SCADA that time need the efficient fault tolerance. The faults like RTU, message passing layer faults in communication system etc. SCADA is nothing but one of application of MPI. The several techniques for the fault tolerance has been described for MPI which are utilized in different applications such as SCADA. The goal of this paper is to present the study over the different fault tolerance techniques which can be used to optimize the SCADA system availability by mitigating the faults in RTU devices and communication systems.",
"title": ""
},
{
"docid": "45009303764570cbfa3532a9d98f5393",
"text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.",
"title": ""
},
{
"docid": "2fc645ec4f9fe757be65f3f02b803b50",
"text": "Multicast communication plays a crucial role in Mobile Adhoc Networks (MANETs). MANETs provide low cost, self configuring devices for multimedia data communication in military battlefield scenarios, disaster and public safety networks (PSN). Multicast communication improves the network performance in terms of bandwidth consumption, battery power and routing overhead as compared to unicast for same volume of data communication. In recent past, a number of multicast routing protocols (MRPs) have been proposed that tried to resolve issues and challenges in MRP. Multicast based group communication demands dynamic construction of efficient and reliable route for multimedia data communication during high node mobility, contention, routing and channel overhead. This paper gives an insight into the merits and demerits of the currently known research techniques and provides a better environment to make reliable MRP. It presents a ample study of various Quality of Service (QoS) techniques and existing enhancement in mesh based MRPs. Mesh topology based MRPs are classified according to their enhancement in routing mechanism and QoS modification on On-Demand Multicast Routing Protocol (ODMRP) protocol to improve performance metrics. This paper covers the most recent, robust and reliable QoS and Mesh based MRPs, classified based on their operational features, with their advantages and limitations, and provides comparison of their performance parameters.",
"title": ""
},
{
"docid": "9ff6983a6b0019de684c8c3131a7a035",
"text": "Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl (1997, 1998). In addition, the complexity of the decision procedures is characterized for all the languages and classes of models considered.",
"title": ""
},
{
"docid": "c57d9c4f62606e8fccef34ddd22edaec",
"text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.",
"title": ""
},
{
"docid": "91f718a69532c4193d5e06bf1ea19fd3",
"text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.",
"title": ""
},
{
"docid": "5525d06ad673a0099d05568f201e1195",
"text": "Bitcoin is a digital payment system empowered by a distributed database called blockchain. The blockchain is an open ledger containing records of every transaction within Bitcoin system maintained by Bitcoin nodes all over the world. Because of its availability and robustness, the blockchain could be utilized as a record-keeping tool for information not related to any Bitcoin transactions. We propose a method of utilizing the blockchain of Bitcoin system to publish information by embedding an arbitrary size of data into Bitcoin transactions. By publishing information using the blockchain, the information also carries the characteristics of Bitcoin transaction: anonymous, decentralized, and permanent. The proposed protocol could be used to extend the functionality of asset management systems which are limited to a maximum of 80 bytes data. The proposed method also offers an efficiency of the transaction fee by 18 percent compared to Bitcoin Messaging protocol.",
"title": ""
},
{
"docid": "b2db53f203f2b168ec99bd8e544ff533",
"text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.",
"title": ""
},
{
"docid": "5f1964f977cd094efab1b77032c7463d",
"text": "A circularly polarized (CP) spiral antenna array for wideband application is presented, which employs a simple single-arm spiral antenna(SASA), as the radiating element. By introducing a small round disc backed near the antenna center and loading a termination resistor, the SASA achieves wide impedance bandwidth without balun circuit and good axial ratio (AR) performance, respectively. A total of four array elements are used in the 2×2 configuration to obtain a higher gain. Particularly, a method of sequentially rotating the elements is used to improve the axial ratio. The array performs an impedance bandwidth for VSWR≤1.5 of 40% ranging from 1.25 to 1.88 GHz, an excellent AR of less than 0.74 dB, and a gain variation from 11.34 to 14.56 dB within the operating band. Measured and simulated results presented for the array confirm its wide impedance, axial ratio, and gain bandwidths. Acceptable agreement between the simulation and measured results validates the proposed design.",
"title": ""
},
{
"docid": "645a1ad9ab07eee096180e08e6f1fdff",
"text": "In the light of evidence from about 200 studies showing gender symmetry in perpetration of partner assault, research can now focus on why gender symmetry is predominant and on the implications of symmetry for primary prevention and treatment of partner violence. Progress in such research is handicapped by a number of problems: (1) Insufficient empirical research and a surplus of discussion and theory, (2) Blinders imposed by commitment to a single causal factor theory-patriarchy and male dominance-in the face of overwhelming evidence that this is only one of a multitude of causes, (3) Research purporting to investigate gender differences but which obtains data on only one gender, (4) Denial of research grants to projects that do not assume most partner violence is by male perpetrators, (5) Failure to investigate primary prevention and treatment programs for female offenders, and (6) Suppression of evidence on female perpetration by both researchers and agencies.",
"title": ""
}
] |
scidocsrr
|
bf553afdce8dca3627eba34b85d6638b
|
Identification of High-Level Concept Clones in Source Code
|
[
{
"docid": "69c8c07b1784d106af6230f737f5b607",
"text": "Legacy systems pose problems to muintainers that can be solved partially with effective tools. A prototype tool for determining collections offiles sharing a large amount of text has been developed and applied to a 40 megabyte source tree containing two releases of the gcc compiler. Similarities in source code and documentation corresponding to software cloning, movement and inertia between releases, as well as the effects of preprocessing easily stand out in a way that immediately conveys nonobvious structural information to a maintainer taking responsibility for such a system.",
"title": ""
},
{
"docid": "94aec5d04cad227660fcbe680e6edbf4",
"text": "The paper focuses on investigating the combined use of semantic and structural information of programs to support the comprehension tasks involved in the maintenance and reengineering of software systems. Here, semantic refers to the domain specific issues (both problem and development domains) of a software system. The other dimension, structural, refers to issues such as the actual syntactic structure of the program along with the control and data flow that it represents. An advanced information retrieval method, latent semantic indexing, is used to define a semantic similarity measure between software components. Components within a software system are then clustered together using this similarity measure. Simple structural information (i.e., file organization) of the software system is then used to assess the semantic cohesion of the clusters and files, with respect to each other. The measures are formally defined for general application. A set of experiments is presented which demonstrates how these measures can assist in the understanding of a nontrivial software system, namely a version of NCSA Mosaic.",
"title": ""
}
] |
[
{
"docid": "36a538b833de4415d12cd3aa5103cf9b",
"text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. In this context, we developed a platform, called P-ETL (Parallel-ETL) for extracting (E), transforming (T) and loading (L) very large data in a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in parallel way with MapReduce paradigm. The conducted experiment shows mainly that increasing tasks dealing with large data speeds-up the ETL process.",
"title": ""
},
{
"docid": "a18ef88938a0d391874a8be61c27694a",
"text": "A growing body of literature has emerged that focuses upon cognitive assessment of video game player experience. Given the growing popularity of video gaming and the increasing literature on cognitive aspects of video gamers, there is a growing need for novel approaches to assessment of the cognitive processes that occur while persons are immersed in video games. In this study, we assessed various stimulus modalities and gaming events using an off-the-shelf EEG devise. A significant difference was found among different stimulus modalities with increasingly difficult cognitive demands. Specifically, beta and gamma power were significantly increased during high intensity events when compared to low intensity gaming events. Our findings suggest that the Emotiv EEG can be used to differentiate between varying stimulus modalities and accompanying cognitive processes. 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3ea0e0ee7061184ebc81f79695ac717b",
"text": "In OMS patients [Figure 1b], the most important pathology change is the loosen of IT tendon sheath.[3] After that, the OH becomes short and fibrosis because of the disuse atrophy. When the patient swallows, the OH cannot be extended, the IT moved laterally and superiorly. The posterior clavicle margin of OH replace IT as a new origin of force, When the patient swallow, the shorten OH like a string, form an X‐shaped tent to elevate the SCM in the lateral neck during upward movement of the hyoid bone. The elevated SCM formed the mass in the neck.",
"title": ""
},
{
"docid": "77278e6ba57e82c88f66bd9155b43a50",
"text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.",
"title": ""
},
{
"docid": "6b0b3984822078c0be64858860bfb6a9",
"text": "In this paper we briefly review the research and flight test activities performed to develop and integrate the Laser Obstacle Avoidance and Monitoring (LOAM) system on helicopter platforms and focus on the recent research advances towards the development of a new scaled LOAM variant for small-to-medium size Unmanned Aircraft (UA) platforms. After a brief description of the system architecture and sensor characteristics, emphasis is given to the performance models and data processing algorithms developed for obstacle detection, classification and calculation of alternative flight paths, as well as to the flight test activities performed on various military platforms. A concluding section provides an overview of current LOAM research developments with a focus on non-cooperative UA Sense-and-Avoid (SAA) applications.",
"title": ""
},
{
"docid": "aee91ee5d4cbf51d9ce1344be4e5448c",
"text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.",
"title": ""
},
{
"docid": "43af3570e8eeee6cf113991e6c0994cf",
"text": "The main goal of modeling human conversation is to create agents which can interact with people in both open-ended and goal-oriented scenarios. End-to-end trained neural dialog systems are an important line of research for such generalized dialog models as they do not resort to any situation-specific handcrafting of rules. However, incorporating personalization into such systems is a largely unexplored topic as there are no existing corpora to facilitate such work. In this paper, we present a new dataset of goal-oriented dialogs which are influenced by speaker profiles attached to them. We analyze the shortcomings of an existing end-toend dialog system based on Memory Networks and propose modifications to the architecture which enable personalization. We also investigate personalization in dialog as a multi-task learning problem, and show that a single model which shares features among various profiles outperforms separate models for each profile.",
"title": ""
},
{
"docid": "ba6b016ace0c098ab345cd5a01af470d",
"text": "This paper describes a vehicle detection system fusing radar and vision data. Radar data are used to locate areas of interest on images. Vehicle search in these areas is mainly based on vertical symmetry. All the vehicles found in different image areas are mixed together, and a series of filters is applied in order to delete false detections. In order to speed up and improve system performance, guard rail detection and a method to manage overlapping areas are also included. Both methods are explained and justified in this paper. The current algorithm analyzes images on a frame-by-frame basis without any temporal correlation. Two different statistics, namely: 1) frame based and 2) event based, are computed to evaluate vehicle detection efficiency, while guard rail detection efficiency is computed in terms of time savings and correct detection rates. Results and problems are discussed, and directions for future enhancements are provided",
"title": ""
},
{
"docid": "1f1158ad55dc8a494d9350c5a5aab2f2",
"text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. Both, for example, 1 This article was published in Australian Journal of Learning Disabilities, 2003 8, (4).",
"title": ""
},
{
"docid": "d961bd734577dad36588f883e56c3a5d",
"text": "Received Jan 5, 2018 Revised Feb 14, 2018 Accepted Feb 28, 2018 This paper proposes Makespan and Reliability based approach, a static sheduling strategy for distributed real time embedded systems that aims to optimize the Makespan and the reliability of an application. This scheduling problem is NP-hard and we rely on a heuristic algorithm to obtain efficiently approximate solutions. Two contributions have to be outlined: First, a hierarchical cooperation between heuristics ensuring to treat alternatively the objectives and second, an Adapatation Module allowing to improve solution exploration by extending the search space. It results a set of compromising solutions offering the designer the possibility to make choices in line with his (her) needs. The method was tested and experimental results are provided.",
"title": ""
},
{
"docid": "2c19e34ba53e7eb8631d979c83ee3e55",
"text": "This paper is the first attempt to learn the policy of an inquiry dialog system (IDS) by using deep reinforcement learning (DRL). Most IDS frameworks represent dialog states and dialog acts with logical formulae. In order to make learning inquiry dialog policies more effective, we introduce a logical formula embedding framework based on a recursive neural network. The results of experiments to evaluate the effect of 1) the DRL and 2) the logical formula embedding framework show that the combination of the two are as effective or even better than existing rule-based methods for inquiry dialog policies.",
"title": ""
},
{
"docid": "da7b01d888bde1984088f190e08af77e",
"text": "One of the most frequently cited sarcasm realizations is the use of positive sentiment within negative context. We propose a novel approach towards modeling a sentiment context of a document via the sequence of sentiment labels assigned to its sentences. We demonstrate that the sentiment flow shifts (from negative to positive and from positive to negative) can be used as reliable classification features for the task of sarcasm detection. Our classifier achieves the F1-measure of 0.7 for all reviews, going up to 0.9 for the reviews with high star ratings (positive reviews), which are the reviews that are materially affected by the presence of sarcasm in the text. Introduction Verbal irony or sarcasm has been studied by psychologists, linguists, and computer scientists for different types of text: speech, fiction, Twitter messages, Internet dialog, product reviews, etc. Sentiment is widely used as a classification feature for the detection of whether a text snippet or a document is sarcastic or not. The popularity of this feature can be explained by the fact that it is agreed that in many cases sarcasm is manifested in a document via a text snippet with positive sentiment applied to a negative situation. Given that the notion of sarcasm (or verbal irony, or irony for that matter) does not have a formal definition except that in the case of sarcasm/irony a nonsalient interpretation has the priority over a salient one, positive utterance within a negative context is a reliable feature to use (Riloff et al. 2013). Other features (textual and non-textual) used for the task of identifying sarcastic text are: emoticons (GonzalezIbáñez, Muresan, and Wacholder 2011), heavy punctuation (Carvalho et al. 2009), hashtags (Wang et al. 2015), quotation marks (Carvalho et al. 2009), positive interjections (Gonzalez-Ibáñez, Muresan, and Wacholder 2011), lexical N-gram cues associated with sarcasm (Davidov, Tsur, and Rappoport 2010), lists of positive and negative words (Gonzalez-Ibáñez, Muresan, and Wacholder 2011), etc. It must be noted that the above features are designed to predict sarcasm in short messages. In this work we demonstrate that these features do not work well for long documents. This means that other features should be devised for detecting sarcasm on a document level. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Recently the necessity of looking beyond the text snippets and into the context that surrounds the possibly sarcastic text utterance got a lot of attention. Researchers investigate the effect of context on sarcasm and design features to capture the global context within which sarcasm appears. Wallace et al. (2015) work on comments from Reddit threads about politics. Wang et al. (2015) work with Twitter messages and analyze these messages as a part of a larger Twitter thread. In both cases, the context is derived using lexical and nonlexical features of the surrounding messages and the information about the overall polarity of the thread (e.g., whether the Reddit thread is a part of the conversation among conservatives or not). The generated context has a certain sentiment that is used for the task of sarcasm detection. In our work we rely on the importance of context for sarcasm detection. 
Our approach to contextualization is based on the common belief that a sarcastic document contains a passage which, when taken out of context and analyzed as a stand-alone sentence with the priority of the salient meaning over the non-salient one, can be classified as positive but within a given (typically negative) context becomes the holder of sarcasm. For example, the following sentence marked with a positive sentiment label while being a part of an overall negative (1-star) review of a Bill Clinton biography documentary signals the presence of sarcasm in the review. This dvd is great if you think that Gennifer Flowers, Paula Jones and Monica Lewinsky were the highlights of the Clinton administration. However, sarcasm can be observed in overall positive (5-star) reviews as well. For example, in a positive (5-star) review about a movie, the following sentence marked as negative is a good signal of sarcasm being present in the review. I believe this film was secretly banned from Oscar consideration due to the fact the committee felt it would be unfair to the other nominees. All sentiment labels presented in this paper are obtained using the Stanford Sentiment Analysis tool (Socher et al. 2013) with the 5-point sentiment scale: very negative (-2), negative (-1), neutral (0), positive (+1), very positive (+2). The Stanford Sentiment Analysis tool sentence sentiment prediction accuracy is 85.4%. All examples presented in this paper are from existing Amazon product reviews. We preserve the original orthography, punctuation, and capitalization.",
"title": ""
},
{
"docid": "a3db8f51d9dfa6608677d63492d2fb6f",
"text": "In this article, we introduce nonlinear versions of the popular structure tensor, also known as second moment matrix. These nonlinear structure tensors replace the Gaussian smoothing of the classical structure tensor by discontinuity-preserving nonlinear diffusions. While nonlinear diffusion is a well-established tool for scalar and vector-valued data, it has not often been used for tensor images so far. Two types of nonlinear diffusion processes for tensor data are studied: an isotropic one with a scalar-valued diffusivity, and its anisotropic counterpart with a diffusion tensor. We prove that these schemes preserve the positive semidefiniteness of a matrix field and are, therefore, appropriate for smoothing structure tensor fields. The use of diffusivity functions of total variation (TV) type allows us to construct nonlinear structure tensors without specifying additional parameters compared to the conventional structure tensor. The performance of nonlinear structure tensors is demonstrated in three fields where the classic structure tensor is frequently used: orientation estimation, optic flow computation, and corner detection. In all these cases, the nonlinear structure tensors demonstrate their superiority over the classical linear one. Our experiments also show that for corner detection based on nonlinear structure tensors, anisotropic nonlinear tensors give the most precise localisation. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "335a551d08afd6af7d90b35b2df2ecc4",
"text": "The interpretation of colonic biopsies related to inflammatory conditions can be challenging because the colorectal mucosa has a limited repertoire of morphologic responses to various injurious agents. Only few processes have specific diagnostic features, and many of the various histological patterns reflect severity and duration of the disease. Importantly the correlation with endoscopic and clinical information is often cardinal to arrive at a specific diagnosis in many cases.",
"title": ""
},
{
"docid": "e1404d2926f51455690883caf01fb2f9",
"text": "The integration of data produced and collected across autonomous, heterogeneous web services is an increasingly important and challenging problem. Due to the lack of global identifiers, the same entity (e.g., a product) might have different textual representations across databases. Textual data is also often noisy because of transcription errors, incomplete information, and lack of standard formats. A fundamental task during data integration is matching of strings that refer to the same entity. In this paper, we adopt the widely used and established cosine similarity metric from the information retrieval field in order to identify potential string matches across web sources. We then use this similarity metric to characterize this key aspect of data integration as a join between relations on textual attributes, where the similarity of matches exceeds a specified threshold. Computing an exact answer to the text join can be expensive. For query processing efficiency, we propose a sampling-based join approximation strategy for execution in a standard, unmodified relational database management system (RDBMS), since more and more web sites are powered by RDBMSs with a web-based front end. We implement the join inside an RDBMS, using SQL queries, for scalability and robustness reasons. Finally, we present a detailed performance evaluation of an implementation of our algorithm within a commercial RDBMS, using real-life data sets. Our experimental results demonstrate the efficiency and accuracy of our techniques.",
"title": ""
},
{
"docid": "a47d001dc8305885e42a44171c9a94b2",
"text": "Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.",
"title": ""
},
{
"docid": "0a1bc682d4c2d2c57605702d44160a20",
"text": "This paper introduces an open architecture humanoid robotics platform (OpenHRP for short) on which various building blocks of humanoid robotics can be investigated. OpenHRP is a virtual humanoid robot platform with a compatible humanoid robot, and consists of a simulator of humanoid robots and motion control library for them which can also be applied to a compatible humanoid robot as it is. OpenHRP also has a view simulator of humanoid robots on which humanoid robot vision can be studied. The consistency between the simulator and the robot are enhanced by introducing a new algorithm to simulate repulsive force and torque between contacting objects. OpenHRP is expected to initiate the exploration of humanoid robotics on an open architecture software and hardware, thanks to the unification of the controllers and the examined consistency between the simulator and a real humanoid robot.",
"title": ""
},
{
"docid": "a95b8848b18567db4d9e30e54042f8eb",
"text": "The action of many extracellular guidance cues on axon pathfinding requires Ca2+ influx at the growth cone (Hong et al., 2000; Nishiyama et al., 2003; Henley and Poo, 2004), but how activation of guidance cue receptors leads to opening of plasmalemmal ion channels remains largely unknown. Analogous to the chemotaxis of amoeboid cells (Parent et al., 1998; Servant et al., 2000), we found that a gradient of chemoattractant triggered rapid asymmetric PI(3,4,5)P3 accumulation at the growth cone's leading edge, as detected by the translocation of a GFP-tagged binding domain of Akt in Xenopus laevis spinal neurons. Growth cone chemoattraction required PI(3,4,5)P3 production and Akt activation, and genetic perturbation of polarized Akt activity disrupted axon pathfinding in vitro and in vivo. Furthermore, patch-clamp recording from growth cones revealed that exogenous PI(3,4,5)P3 rapidly activated TRP (transient receptor potential) channels, and asymmetrically applied PI(3,4,5)P3 was sufficient to induce chemoattractive growth cone turning in a manner that required downstream Ca2+ signaling. Thus, asymmetric PI(3,4,5)P3 elevation and Akt activation are early events in growth cone chemotaxis that link receptor activation to TRP channel opening and Ca2+ signaling. Altogether, our findings reveal that PI(3,4,5)P3 elevation polarizes to the growth cone's leading edge and can serve as an early regulator during chemotactic guidance.",
"title": ""
},
{
"docid": "78179425b45a0aa0eba67fba802e5c6c",
"text": "Internet Gaming Disorder (IGD) is a potential mental disorder currently included in the third section of the latest (fifth) edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-5) as a condition that requires additional research to be included in the main manual. Although research efforts in the area have increased, there is a continuing debate about the respective criteria to use as well as the status of the condition as mental health concern. Rather than using diagnostic criteria which are based on subjective symptom experience, the National Institute of Mental Health advocates the use of Research Domain Criteria (RDoC) which may support classifying mental disorders based on dimensions of observable behavior and neurobiological measures because mental disorders are viewed as biological disorders that involve brain circuits that implicate specific domains of cognition, emotion, and behavior. Consequently, IGD should be classified on its underlying neurobiology, as well as its subjective symptom experience. Therefore, the aim of this paper is to review the neurobiological correlates involved in IGD based on the current literature base. Altogether, 853 studies on the neurobiological correlates were identified on ProQuest (in the following scholarly databases: ProQuest Psychology Journals, PsycARTICLES, PsycINFO, Applied Social Sciences Index and Abstracts, and ERIC) and on MEDLINE, with the application of the exclusion criteria resulting in reviewing a total of 27 studies, using fMRI, rsfMRI, VBM, PET, and EEG methods. The results indicate there are significant neurobiological differences between healthy controls and individuals with IGD. The included studies suggest that compared to healthy controls, gaming addicts have poorer response-inhibition and emotion regulation, impaired prefrontal cortex (PFC) functioning and cognitive control, poorer working memory and decision-making capabilities, decreased visual and auditory functioning, and a deficiency in their neuronal reward system, similar to those found in individuals with substance-related addictions. This suggests both substance-related addictions and behavioral addictions share common predisposing factors and may be part of an addiction syndrome. Future research should focus on replicating the reported findings in different cultural contexts, in support of a neurobiological basis of classifying IGD and related disorders.",
"title": ""
},
{
"docid": "5e31d7ff393d69faa25cb6dea5917a0e",
"text": "In this paper we aim to formally explain the phenomenon of fast convergence of Stochastic Gradient Descent (SGD) observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for mini-batch SGD parallel to that for full gradient descent. We show that there is a critical batch size m∗ such that: (a) SGD iteration with mini-batch sizem ≤ m∗ is nearly equivalent to m iterations of mini-batch size 1 (linear scaling regime). (b) SGD iteration with mini-batch m > m∗ is nearly equivalent to a full gradient descent iteration (saturation regime). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying O(n) acceleration over GD per unit of computation. We give experimental evidence on real data which closely follows our theoretical analyses. Finally, we show how our results fit in the recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction. † See full version of this paper at arxiv.org/abs/1712.06559. Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA. Correspondence to: Siyuan Ma <masi@cse.ohio-state.edu>, Raef Bassily <bassily.1@osu.edu>, Mikhail Belkin <mbelkin@cse.ohio-state.edu>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
}
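The linear-scaling and saturation regimes described in the abstract above can be illustrated numerically on a small least-squares problem: iterations to reach a fixed loss shrink roughly like 1/m for small batch sizes and then flatten. This is a minimal sketch for illustration only, not the authors' code; the data, the step-size rule and the convergence threshold are assumptions chosen so the experiment runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)                      # noise-free, so the model interpolates

L_max = max(np.linalg.norm(x) ** 2 for x in X)  # largest per-example smoothness
lam1 = np.linalg.eigvalsh(X.T @ X / n).max()    # top eigenvalue of the covariance

def iterations_to_converge(m, tol=1e-6, max_iters=100000):
    """Mini-batch SGD on the quadratic loss; returns iterations until the loss < tol."""
    w = np.zeros(d)
    eta = m / (L_max + (m - 1) * lam1)          # illustrative batch-size-dependent step size
    for t in range(1, max_iters + 1):
        idx = rng.integers(0, n, size=m)
        w -= eta * X[idx].T @ (X[idx] @ w - y[idx]) / m
        if t % 50 == 0 and np.mean((X @ w - y) ** 2) < tol:
            return t
    return max_iters

for m in [1, 2, 4, 8, 16, 32, 64, 128]:
    print(f"batch size {m:4d}: {iterations_to_converge(m)} iterations")
```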
] |
scidocsrr
|
aa6173ec0f9d80b35164e2ac3cff7c68
|
Fuzzing: State of the Art
|
[
{
"docid": "ab51b39647784a4788c705e2fb6b3a20",
"text": "We propose a light-weight, yet effective, technique for fuzz-testing security protocols. Our technique is modular, it exercises (stateful) protocol implementations in depth, and handles encrypted traffic. We use a concrete implementation of the protocol to generate valid inputs, and mutate the inputs using a set of fuzz operators. A dynamic memory analysis tool monitors the execution as an oracle to detect the vulnerabilities exposed by fuzz-testing. We provide the fuzzer with the necessary keys and cryptographic algorithms in order to properly mutate encrypted messages. We present a case study on two widely used, mature implementations of the Internet Key Exchange (IKE) protocol and report on two new vulnerabilities discovered by our fuzz-testing tool. We also compare the effectiveness of our technique to two existing model-based fuzz-testing tools for IKE.",
"title": ""
},
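The mutation-based workflow described above can be outlined in a few lines: start from a valid protocol message, apply randomly chosen fuzz operators, and pass the result to the implementation under test while an external oracle (e.g. a dynamic memory analysis tool) watches for faults. This is a generic sketch rather than the authors' tool; `send_to_target` is a placeholder for the transport, session state and re-encryption handling a real protocol fuzzer needs.

```python
import random

def bit_flip(data: bytes) -> bytes:
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]

def truncate(data: bytes) -> bytes:
    return data[:random.randrange(1, len(data))]

def duplicate_chunk(data: bytes) -> bytes:
    i = random.randrange(len(data))
    j = random.randrange(i, len(data))
    return data[:j] + data[i:j] + data[j:]

FUZZ_OPERATORS = [bit_flip, truncate, duplicate_chunk]

def send_to_target(message: bytes) -> bool:
    """Placeholder: deliver the (possibly re-encrypted) message to the system
    under test and return True if the memory-analysis oracle reports a fault."""
    return False

def fuzz(valid_message: bytes, iterations: int = 10000) -> None:
    for _ in range(iterations):
        mutated = random.choice(FUZZ_OPERATORS)(valid_message)
        if send_to_target(mutated):
            print("fault triggered by input:", mutated.hex())

fuzz(b"\x01\x20\x22\x08example-handshake-payload")
```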
{
"docid": "2c7920f53eed99e3a7380ebc036e67a5",
"text": "We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers.",
"title": ""
}
] |
[
{
"docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d",
"text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.",
"title": ""
},
{
"docid": "eec1f1cdb7b4adfec71f5917b077661a",
"text": "Digital games have become a remarkable cultural phenomenon in the last ten years. The casual games sector especially has been growing rapidly in the last few years. However, there is no clear view on what is \"casual\" in games cultures and the area has not previously been rigorously studied. In the discussions on casual games, \"casual\" is often taken to refer to the player, the game or the playing style, but other factors such as business models and accessibility are also considered as characteristic of \"casual\" in games. Views on casual vary and confusion over different meanings can lead to paradoxical readings, which is especially the case when \"casual gamer\" is taken to mean both \"someone who plays casual games\" and someone who \"plays casually\". In this article we will analyse the ongoing discussion by providing clarification of the different meanings of casual and a framework for an overall understanding of casual in the level of expanded game experience.",
"title": ""
},
{
"docid": "5a27ac14c13ef7c7cf9d6fd1b535d03e",
"text": "Great database systems performance relies heavily on index tuning, i.e., creating and utilizing the best indices depending on the workload. However, the complexity of the index tuning process has dramatically increased in recent years due to ad-hoc workloads and shortage of time and system resources to invest in tuning.\n This paper introduces holistic indexing, a new approach to automated index tuning in dynamic environments. Holistic indexing requires zero set-up and tuning effort, relying on adaptive index creation as a side-effect of query processing. Indices are created incrementally and partially;they are continuously refined as we process more and more queries. Holistic indexing takes the state-of-the-art adaptive indexing ideas a big step further by introducing the notion of a system which never stops refining the index space, taking educated decisions about which index we should incrementally refine next based on continuous knowledge acquisition about the running workload and resource utilization. When the system detects idle CPU cycles, it utilizes those extra cycles by refining the adaptive indices which are most likely to bring a benefit for future queries. Such idle CPU cycles occur when the system cannot exploit all available cores up to 100%, i.e., either because the workload is not enough to saturate the CPUs or because the current tasks performed for query processing are not easy to parallelize to the point where all available CPU power is exploited.\n In this paper, we present the design of holistic indexing for column-oriented database architectures and we discuss a detailed analysis against parallel versions of state-of-the-art indexing and adaptive indexing approaches. Holistic indexing is implemented in an open-source column-store DBMS. Our detailed experiments on both synthetic and standard benchmarks (TPC-H) and workloads (SkyServer) demonstrate that holistic indexing brings significant performance gains by being able to continuously refine the physical design in parallel to query processing, exploiting any idle CPU resources.",
"title": ""
},
{
"docid": "5ebb65f075fd00130e6684b86b9ab235",
"text": "While machine learning systems have recently achieved impressive, (super)human-level performance in several tasks, they have often relied on unnatural amounts of supervision – e.g. large numbers of labeled images or continuous scores in video games. In contrast, human learning is largely unsupervised, driven by observation and interaction with the world. Emulating this type of learning in machines is an open challenge, and one that is critical for general artificial intelligence. Here, we explore prediction of future frames in video sequences as an unsupervised learning rule. A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed several models capable of accurate prediction in complex sequences. Our first model consists of a recurrent extension to the standard autoencoder framework. Trained end-to-end to predict the movement of synthetic stimuli, we find that the model learns a representation of the underlying latent parameters of the 3D objects themselves. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. In addition, we explore the use of an adversarial loss, as in a Generative Adversarial Network, illustrating its complementary effects to traditional pixel losses for the task of next-frame prediction.",
"title": ""
},
{
"docid": "61ce4f9ec7e72e88294ab0db4ad0b639",
"text": "Although sexist attitudes are generally thought to undermine support for employment equity (EE) policies supporting women, we argue that the effects of benevolent sexism are more complex. Across 4 studies, we extend the ambivalent sexism literature by examining both the positive and the negative effects benevolent sexism has for the support of gender-based EE policies. On the positive side, we show that individuals who endorse benevolent sexist attitudes on trait measures of sexism (Study 1) and individuals primed with benevolent sexist attitudes (Study 2) are more likely to support an EE policy, and that this effect is mediated by feelings of compassion. On the negative side, we find that this support extends only to EE policies that promote the hiring of women in feminine, and not in masculine, positions (Study 3 and 4). Thus, while benevolent sexism may appear to promote gender equality, it subtly undermines it by contributing to occupational gender segregation and leading to inaction in promoting women in positions in which they are underrepresented (i.e., masculine positions). (PsycINFO Database Record",
"title": ""
},
{
"docid": "fb6f4d00903dced8f850a1f9af1f2532",
"text": "A demonstration of a real-time full-duplex wireless link is presented, in which a pair of full-duplex transceivers perform simultaneous transmission and reception on the same frequency channel. A full-duplex transceiver is composed of a custom-designed small-form-factor analog self-interference canceller, and a digital self-interference cancellation implementation is integrated with the National Instruments Universal Software Radio Peripheral (USRP). An adaptive analog self-interference canceller tuning mechanism adjusts to environmental changes. We demonstrate the practicality and robustness of the full-duplex wireless link through the National Instruments LabVIEW interface.",
"title": ""
},
{
"docid": "783d7251658f9077e05a7b1b9bd60835",
"text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.",
"title": ""
},
{
"docid": "ffb03136c1f8d690be696f65f832ab11",
"text": "This paper aims to improve the feature learning in Convolutional Networks (Convnet) by capturing the structure of objects. A new sparsity function is imposed on the extracted featuremap to capture the structure and shape of the learned object, extracting interpretable features to improve the prediction performance. The proposed algorithm is based on organizing the activation within and across featuremap by constraining the node activities through `2 and `1 normalization in a structured form.",
"title": ""
},
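As a rough illustration of the structured ℓ2/ℓ1 idea above, the penalty below takes an ℓ2 norm within each feature map (channel) and an ℓ1-style sum across channels, so whole maps are encouraged to switch off while activity within a map stays organized. The grouping and weighting are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def group_sparsity_penalty(fmaps, lam=1e-3):
    """fmaps: activations of shape (channels, height, width).
    l2 within each channel, summed (l1) across channels."""
    per_channel_l2 = np.sqrt((fmaps ** 2).sum(axis=(1, 2)))
    return lam * per_channel_l2.sum()

fmaps = np.random.randn(64, 14, 14)
print(group_sparsity_penalty(fmaps))
```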
{
"docid": "d41b08be7a46e042dca4bdcd2bf1d24c",
"text": "This paper presents the EPAC corpus which is composed by a set of 100 hours of conversational speech manually transcribed and by the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. For example, on the EPAC test data set our ASR system yields a word error rate equals to 17.25%.",
"title": ""
},
{
"docid": "f37f61322c6d4fc5c28c3c24adc949a4",
"text": "1 Bob Zmud was the accepting Senior Editor for this article. 2 Much research has been done on organizational forms, particularly hierarchies, communities, and markets. A classic article that describes how firms can be organized is Ouchi, W. G., “Markets, Bureaucracies, and Clans,” Administrative Science Quarterly (25), 1980, pp. 129-141. Three more recent articles that examine the use of markets, hierarchies, and communities that are internal to the firm are Adler, P. S., “Market, Hierarchy, and Trust: The Knowledge Economy and the Future of Capitalism,” Organization Science (12:2), 2001, pp. 214-234; Barney, J. B., “Firm Resources and Sustained Competitive Advantage,” Journal of Management (17), 1991, pp. 99-120; and Conner, K. R. and Prahalad, C. K., “A Resource-based Theory of the Firm: Knowledge Versus Opportunism,” Organization Science (7:5), 1996, pp. 477-501. 3 Note that we use the term “reuse” because our focus is the knowledge management strategy rather than the user per se. 4 Alavi, M. and Leidner, D. E., “Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues,” MIS Quarterly (25:1), 2001, pp. 107-136; Conner and Prahalad, op. cit.; Grant, R. M., “Prospering in Dynamically Competitive Environments: Organizational Capability as Knowledge Integration,” THREE KNOWLEDGE MANAGEMENT STRATEGIES: KNOWLEDGE HIERARCHIES, KNOWLEDGE MARKETS, AND KNOWLEDGE COMMUNITIES",
"title": ""
},
{
"docid": "7f14c41cc6ca21e90517961cf12c3c9a",
"text": "Probiotic microorganisms have been documented over the past two decades to play a role in cholesterol-lowering properties via various clinical trials. Several mechanisms have also been proposed and the ability of these microorganisms to deconjugate bile via production of bile salt hydrolase (BSH) has been widely associated with their cholesterol lowering potentials in prevention of hypercholesterolemia. Deconjugated bile salts are more hydrophobic than their conjugated counterparts, thus are less reabsorbed through the intestines resulting in higher excretion into the feces. Replacement of new bile salts from cholesterol as a precursor subsequently leads to decreased serum cholesterol levels. However, some controversies have risen attributed to the activities of deconjugated bile acids that repress the synthesis of bile acids from cholesterol. Deconjugated bile acids have higher binding affinity towards some orphan nuclear receptors namely the farsenoid X receptor (FXR), leading to a suppressed transcription of the enzyme cholesterol 7-alpha hydroxylase (7AH), which is responsible in bile acid synthesis from cholesterol. This notion was further corroborated by our current docking data, which indicated that deconjugated bile acids have higher propensities to bind with the FXR receptor as compared to conjugated bile acids. Bile acids-activated FXR also induces transcription of the IBABP gene, leading to enhanced recycling of bile acids from the intestine back to the liver, which subsequently reduces the need for new bile formation from cholesterol. Possible detrimental effects due to increased deconjugation of bile salts such as malabsorption of lipids, colon carcinogenesis, gallstones formation and altered gut microbial populations, which contribute to other varying gut diseases, were also included in this review. Our current findings and review substantiate the need to look beyond BSH deconjugation as a single factor/mechanism in strain selection for hypercholesterolemia, and/or as a sole mean to justify a cholesterol-lowering property of probiotic strains.",
"title": ""
},
{
"docid": "67c5afa7f61f65a9aad0b951fa153f8d",
"text": "Neural networkbased methods have been viewed as one of the major driving force in the recent development of natural language processing (NLP). We all have witnessed with great excitement how this subfield advances: new ideas emerge at an unprecedented speed and old ideas resurge in unexpected ways. In a nutshell, there are two major trends: ● Ideas and techniques from other fields of machine learning and artificial intelligence (A.I.) have increasing impact on neural networkbased NLP methods. ● With endtoend models taking on more complex tasks, the design of architecture and mechanisms often needs more domain knowledge from linguists and other domain experts. Both trends are important to researchers in the computational linguistics community. Fundamental ideas like external memory or reinforcement learning, although introduced to NLP only recently, have quickly lead to significant improvement on tasks like natural language generation and question answering. On the other hand, with complicated neural systems with many cooperating components, it calls for linguistic knowledge in designing the right mechanism, architecture, and sometimes training setting. As a simple example, the introducing of automatic alignment in neural machine translation, has quickly led to the stateoftheart performance in machine translation and triggered a large body of sequencetosequence models. It is therefore important to get the researchers in computational linguistics community acquainted with the recent progress in deep learning for NLP. We will focus on the work and ideas strongly related to the core of natural language and yet not so familiar to the majority of the community, which can be roughly categorized into: 1) the differentiable datastructures, and 2) the learning paradigms for NLP. Differentiable datastructures, starting with the memory equipped with continuous operations in Neural Turing Machine, have been the foundation of deep models with sophisticated operations. Some members of it, such as Memory Network, have become famous on tasks like question answering and machine translation, while other development in this direction, including those with clear and important application in NLP, are relatively new to this community. Deep learning, with its promise on endtoend learning, not only enables the training of complex NLP models from scratch, but also extends the training setting to include remote and indirect supervision. We will introduce not only the endtoend learning in its general notion, but also newly emerged",
"title": ""
},
{
"docid": "f70949a29dbac974a7559684d1da3ebb",
"text": "Cybercrime is all about the crimes in which communication channel and communication device has been used directly or indirectly as a medium whether it is a Laptop, Desktop, PDA, Mobile phones, Watches, Vehicles. The report titled “Global Risks for 2012”, predicts cyber-attacks as one of the top five risks in the World for Government and business sector. Cyber crime is a crime which is harder to detect and hardest to stop once occurred causing a long term negative impact on victims. With the increasing popularity of online banking, online shopping which requires sensitive personal and financial data, it is a term that we hear in the news with some frequency. Now, in order to protect ourselves from this crime we need to know what it is and how it does works against us. This paper presents a brief overview of all about cyber criminals and crime with its evolution, types, case study, preventive majors and the department working to combat this crime.",
"title": ""
},
{
"docid": "061dc618163d08972b73af42e8628159",
"text": "< draft-ietf-pkix-ipki-new-rfc2527-01.txt > Status of this Memo This document is an Internet-Draft and is subject to all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of 6 months and may be updated, replaced, or may become obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as work in progress. To view the entire list of current Internet-Drafts, please check the \"1id-abstracts.txt\" listing contained in the Internet-Drafts Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast). Abstract This document presents a framework to assist the writers of certificate policies or certification practice statements for participants within public key infrastructures, such as certification authorities, policy authorities, and communities of interest that wish to rely on certificates. In particular, the framework provides a comprehensive list of topics that potentially (at the writer's discretion) need to be covered in a certificate policy or a certification practice statement. This document is being submitted to the RFC Editor with a request for publication as an Informational RFC that will supercede RFC 2527 [CPF].",
"title": ""
},
{
"docid": "ea52c884ddfb34ce3336f6795455ddbe",
"text": "In this paper we introduce Smooth Particle Networks (SPNets), a framework for integrating fluid dynamics with deep networks. SPNets adds two new layers to the neural network toolbox: ConvSP and ConvSDF, which enable computing physical interactions with unordered particle sets. We use these layers in combination with standard neural network layers to directly implement fluid dynamics inside a deep network, where the parameters of the network are the fluid parameters themselves (e.g., viscosity, cohesion, etc.). Because SPNets are implemented as a neural network, the resulting fluid dynamics are fully differentiable. We then show how this can be successfully used to learn fluid parameters from data, perform liquid control tasks, and learn policies to manipulate liquids.",
"title": ""
},
{
"docid": "366f31829bb1ac55d195acef880c488e",
"text": "Intense competition among a vast number of group-buying websites leads to higher product homogeneity, which allows customers to switch to alternative websites easily and reduce their website stickiness and loyalty. This study explores the antecedents of user stickiness and loyalty and their effects on consumers’ group-buying repurchase intention. Results indicate that systems quality, information quality, service quality, and alternative system quality each has a positive relationship with user loyalty through user stickiness. Meanwhile, information quality directly impacts user loyalty. Thereafter, user stickiness and loyalty each has a positive relationship with consumers’ repurchase intention. Theoretical and managerial implications are also discussed.",
"title": ""
},
{
"docid": "30c67c52cb258f86998263b378e0c66d",
"text": "This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of $1082\\times 728$ . The executable code and our collected data set are publicly available.",
"title": ""
},
{
"docid": "abb7dceb1bd532c31029b5030c9a12e3",
"text": "In this paper, we present a real time method based on some video and image processing algorithms for eye blink detection. The motivation of this research is the need of disabling who cannot control the calls with human mobile interaction directly without the need of hands. A Haar Cascade Classifier is applied for face and eye detection for getting eye and facial axis information. In addition, the same classifier is used based on Haarlike features to find out the relationship between the eyes and the facial axis for positioning the eyes. An efficient eye tracking method is proposed which uses the position of detected face. Finally, an eye blinking detection based on eyelids state (close or open) is used for controlling android mobile phones. The method is used with and without smoothing filter to show the improvement of detection accuracy. The application is used in real time for studying the effect of light and distance between the eyes and the mobile device in order to evaluate the accuracy detection and overall accuracy of the system. Test results show that our proposed method provides a 98% overall accuracy and 100% detection accuracy for a distance of 35 cm and an artificial light. Keywords—eye detection; eye tracking; eye blinking; smoothing filter; detection accuracy",
"title": ""
},
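A stripped-down version of the pipeline described above can be assembled from OpenCV's stock Haar cascades: detect the face, look for eyes in its upper half, and flag a blink when previously visible eyes disappear. This is a hedged sketch with assumed thresholds, not the paper's implementation; the facial-axis positioning and the Android integration are omitted.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
eyes_were_open = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)  # smoothing filter
    eyes_open = False
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]                 # eyes lie in the upper half of the face
        eyes_open = len(eye_cascade.detectMultiScale(roi, 1.1, 10)) >= 1
    if eyes_were_open and not eyes_open:
        print("blink detected")                           # the paper maps this event to a phone action
    eyes_were_open = eyes_open
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```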
{
"docid": "850cb2c41ef9e42df458156c4000f507",
"text": "A VANET is a network where each node represents a vehicle equipped with wireless communication technology. This type of network enhances road safety, traffic efficiency, Internet access and many others applications to minimize environmental impact and in general maximize the benefits for the road users. This paper studies a relevant problem in VANETs, known as the deployment of RSUs. A RSU is an access points, used together with the vehicles, to allow information dissemination in the roads. Knowing where to place these RSUs so that a maximum number of vehicles circulating is covered is a challenge. We model the problem as a Maximum Coverage with Time Threshold Problem (MCTTP), and use a genetic algorithm to solve it. The algorithm is tested in four real-world datasets, and compared to a greedy approach previously proposed in the literature. The results show that our approach finds better results than the greedy in all scenarios, with gains up to 11 percentage points.",
"title": ""
},
{
"docid": "240c24e9fb564eba961d74f1d45cdbde",
"text": "In recent years, the rise of digital image and video data available has led to an increasing demand for image annotation. In this paper, we propose an interactive object annotation method that incrementally trains an object detector while the user provides annotations. In the design of the system, we have focused on minimizing human annotation time rather than pure algorithm learning performance. To this end, we optimize the detector based on a realistic annotation cost model based on a user study. Since our system gives live feedback to the user by detecting objects on the fly and predicts the potential annotation costs of unseen images, data can be efficiently annotated by a single user without excessive waiting time. In contrast to popular tracking-based methods for video annotation, our method is suitable for both still images and video. We have evaluated our interactive annotation approach on three datasets, ranging from surveillance, television, to cell microscopy.",
"title": ""
}
] |
scidocsrr
|
b6e6dc7c18b76d7b52ebb52728aec1e2
|
Industrial robot capability models for agile manufacturing
|
[
{
"docid": "475c6c50d2b8e0a3f66628412f5bcf34",
"text": "Task allocation is an important aspect of many multi-robot systems. The features and complexity of multi-robot task allocation (MRTA) problems are dictated by the requirements of the particular domain under consideration. These problems can range from those involving instantaneous distribution of simple, independent tasks among members of a homogenous team, to those requiring the time-extended scheduling of complex interrelated multi-step tasks for a members of a heterogenous team related by several constraints. The existing widely-used taxonomy for task allocation in multi-robot systems addresses only problems with independent tasks and does not deal with problems with interrelated utilities and constraints. A survey of recent work in multi-robot task allocation reveals that this is a significant deficiency with respect to realistic multi-robot task allocation problems. Thus, in this paper, we present a new, comprehensive taxonomy, iTax, that explicitly takes into consideration the issues of interrelated utilities and constraints. Our taxonomy maps categories of MRTA problems to existing mathematical models from combinatorial optimization and operations research, and hence draws important parallels between robotics and these fields.",
"title": ""
}
] |
[
{
"docid": "4487f3713062ef734ceab5c7f9ccc6e3",
"text": "In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights changes from zero.",
"title": ""
},
{
"docid": "361ce46466dd2a1d25b31beee7884cd6",
"text": "As text semantics has an important role in text meaning, the term semantics has been seen in a vast sort of text mining studies. However, there is a lack of studies that integrate the different research branches and summarize the developed works. This paper reports a systematic mapping about semantics-concerned text mining studies. This systematic mapping study followed a well-defined protocol. Its results were based on 1693 studies, selected among 3984 studies identified in five digital libraries. The produced mapping gives a general summary of the subject, points some areas that lacks the development of primary or secondary studies, and can be a guide for researchers working with semantics-concerned text mining. It demonstrates that, although several studies have been developed, the processing of semantic aspects in text mining remains an open research problem.",
"title": ""
},
{
"docid": "7fc3dfcc8fa43c36938f41877a65bed7",
"text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1",
"title": ""
},
{
"docid": "41481b2f081831d28ead1b685465d535",
"text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential which is derived from its high content of bioflavonoids such as apigenin, quercitin, luteoline. Furthermore, indole compounds, amely choline, which known for antioxidants and also possess chelating property for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin. The only difference is that the central element in chlorophyll is magnesium and in hemoglobin it is iron. In wheat grass makes it more useful in various clinical conditions involving hemoglobin deficiency and other chronic disorders ultimately considered as green blood.",
"title": ""
},
{
"docid": "53343bc045189bf7578619e7d60a36ba",
"text": "Financial technology (FinTech) is the new business model and technology which aims to compete with traditional financial services and blockchain is one of most famous technology use of FinTech. Blockchain is a type of distributed, electronic database (ledger) which can hold any information (e.g. records, events, transactions) and can set rules on how this information is updated. The most well-known application of blockchain is bitcoin, which is a kind of cryptocurrencies. But it can also be used in many other financial and commercial applications. A prominent example is smart contracts, for instance as offered in Ethereum. A contract can execute a transfer when certain events happen, such as payment of a security deposit, while the correct execution is enforced by the consensus protocol. The purpose of this paper is to explore the research and application landscape of blockchain technology acceptance by following a more comprehensive approach to address blockchain technology adoption. This research is to propose a unified model integrating Innovation Diffusion Theory (IDT) model and Technology Acceptance Model (TAM) to investigate continuance intention to adopt blockchain technology.",
"title": ""
},
{
"docid": "e0f5f73eb496b77cddc5820fb6306f4b",
"text": "Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenét-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.",
"title": ""
},
{
"docid": "c0d8f6f343f2602d9a32ba228f51f315",
"text": "Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification method including decision tree induction, Bayesian networks, k-nearest neighbor classifier, case-based reasoning, genetic algorithm and fuzzy logic techniques. The goal of this survey is to provide a comprehensive review of different classification techniques in data mining.",
"title": ""
},
{
"docid": "ac9bfa64fa41d4f22fc3c45adaadb099",
"text": "Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.",
"title": ""
},
{
"docid": "10ebcd3a97863037b5bdab03c06dd0e1",
"text": "Nonlinear dynamical systems are ubiquitous in science and engineering, yet many issues still exist related to the analysis and prediction of these systems. Koopman theory circumvents these issues by transforming the finite-dimensional nonlinear dynamics to a linear dynamical system of functions in an infinite-dimensional Hilbert space of observables. The eigenfunctions of the Koopman operator evolve linearly in time and thus provide a natural coordinate system for simplifying the dynamical behaviors of the system. We consider a family of observable functions constructed by projecting the delay coordinates of the system onto the eigenvectors of the autocorrelation function, which can be regarded as continuous SVD basis vectors for time-delay observables. We observe that these functions are the most parsimonious basis of observables for a system with Koopman mode decomposition of order N , in the sense that the associated Koopman eigenfunctions are guaranteed to lie in the span of the first N of these coordinates. We conjecture and prove a number of theoretical results related to the quality of these approximations in the more general setting where the system has mixed spectra or the coordinates are otherwise insufficient to capture the full spectral information. We prove a related and very general result that the dynamics of the observables generated by projecting delay coordinates onto an arbitrary orthonormal basis are systemindependent and depend only on the choice of basis, which gives a highly efficient way of computing representations of the Koopman operator in these coordinates. We show that this formalism provides a theoretical underpinning for the empirical results in [8], which found that chaotic dynamical systems can be approximately factored into intermittently forced linear systems when viewed in delay coordinates. Finally, we compute these time delay observables for a number of example dynamical systems and show that empirical results match our theory.",
"title": ""
},
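The delay-coordinate observables discussed above can be computed directly: stack time-shifted copies of a scalar measurement into a Hankel-type matrix and take its SVD, so that projections onto the leading singular vectors give the low-dimensional observable trajectories. A minimal sketch on a synthetic signal; the choice of signal, the number of delays and the truncation rank are illustrative assumptions.

```python
import numpy as np

# Synthetic scalar measurement standing in for a time series from a dynamical system.
t = np.linspace(0, 100, 5000)
x = np.sin(t) + 0.5 * np.sin(np.sqrt(2.0) * t) + 0.1 * np.sin(t) ** 3

q = 100                                                        # number of delays
H = np.column_stack([x[i:len(x) - q + i] for i in range(q)])   # rows are delay vectors

U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 5                                                          # truncation rank
observables = H @ Vt[:r].T                                     # trajectories of the leading observables
print(observables.shape, np.round(s[:r] / s.sum(), 3))
```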
{
"docid": "0846f7d40f5cbbd4c199dfb58c4a4e7d",
"text": "While active learning has drawn broad attention in recent years, there are relatively few studies on stopping criterion for active learning. We here propose a novel model stability based stopping criterion, which considers the potential of each unlabeled examples to change the model once added to the training set. The underlying motivation is that active learning should terminate when the model does not change much by adding remaining examples. Inspired by the widely used stochastic gradient update rule, we use the gradient of the loss at each candidate example to measure its capability to change the classifier. Under the model change rule, we stop active learning when the changing ability of all remaining unlabeled examples is less than a given threshold. We apply the stability-based stopping criterion to two popular classifiers: logistic regression and support vector machines (SVMs). It can be generalized to a wide spectrum of learning models. Substantial experimental results on various UCI benchmark data sets have demonstrated that the proposed approach outperforms state-of-art methods in most cases.",
"title": ""
},
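One way to read the model-change rule above in code, for a logistic-regression learner: the gradient an unlabeled example could contribute bounds how much it can move the parameters, so active learning stops once the largest such gradient norm (under the worst-case label) drops below a threshold. The threshold value and the worst-case-label bound are assumptions made for this sketch, not the paper's exact criterion.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def max_potential_change(w, X_pool):
    """Largest per-example gradient norm of the logistic loss over the unlabeled pool.
    For label y in {0, 1} the gradient is (sigmoid(w.x) - y) * x, so its norm is at
    most max(p, 1 - p) * ||x||."""
    p = sigmoid(X_pool @ w)
    worst = np.maximum(p, 1.0 - p)
    return np.max(worst * np.linalg.norm(X_pool, axis=1))

def should_stop(w, X_pool, threshold=0.05):
    return max_potential_change(w, X_pool) < threshold
```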
{
"docid": "1347bdea8cff516b66e9748ee0ea5aed",
"text": "The image enhancement methods based on histogram equalization (HE) often fail to improve local information and sometimes have the fatal flaw of over-enhancement when a quantum jump occurs in the cumulative distribution function of the histogram. To overcome these shortcomings, we propose an image enhancement method based on a modified Laplacian pyramid framework that decomposes an image into band-pass images to improve both the global contrast and local information. For the global contrast, a novel robust HE is proposed to provide a well-balanced mapping function which effectively suppresses the quantum jump. For the local information, noise-reduced and adaptively gained high-pass images are applied to the resultant image. In qualitative and quantitative comparisons through experimental results, the proposed method shows natural and robust image quality and suitability for video sequences, achieving generally higher performance when compared to existing methods.",
"title": ""
},
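The band-pass decomposition that the enhancement method above builds on looks roughly like the following: split the image into a Laplacian pyramid, gain the high-pass levels, and collapse the pyramid back. The robust histogram mapping and noise handling of the paper are omitted; pyramid depth and gain here are arbitrary illustration values.

```python
import cv2
import numpy as np

def bandpass_enhance(img_gray, levels=4, gain=1.5):
    """Amplify the Laplacian (band-pass) levels of a grayscale image and recollapse."""
    gaussians = [img_gray.astype(np.float32)]
    for _ in range(levels):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    laplacians = [g - cv2.pyrUp(gaussians[i + 1], dstsize=g.shape[::-1])
                  for i, g in enumerate(gaussians[:-1])]
    out = gaussians[-1]
    for i in reversed(range(levels)):
        out = cv2.pyrUp(out, dstsize=gaussians[i].shape[::-1]) + gain * laplacians[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```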
{
"docid": "651cbd4ba53f51a7ed20b9105f3ffe44",
"text": "We addressed the problem of discriminating between 24 diseased and 17 healthy specimens on the basis of protein mass spectra. To prepare the data, we performed mass to charge ratio (m/z) normalization, baseline elimination, and conversion of absolute peak height measures to height ratios. After preprocessing, the major difficulty encountered was the extremely large number of variables (1676 m/z values) versus the number of examples (41). Dimensionality reduction was treated as an integral part of the classification process; variable selection was coupled with model construction in a single ten-fold cross-validation loop. We explored different experimental setups involving two peak height representations, two variable selection methods, and six induction algorithms, all on both the original 1676-mass data set and on a prescreened 124-mass data set. Highest predictive accuracies (1-2 off-sample misclassifications) were achieved by a multilayer perceptron and Naïve Bayes, with the latter displaying more consistent performance (hence greater reliability) over varying experimental conditions. We attempted to identify the most discriminant peaks (proteins) on the basis of scores assigned by the two variable selection methods and by neural network based sensitivity analysis. These three scoring schemes consistently ranked four peaks as the most relevant discriminators: 11683, 1403, 17350 and 66107.",
"title": ""
},
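The point stressed above, that variable selection was coupled with model construction inside the ten-fold cross-validation loop, maps naturally onto a pipeline whose selection step is refit in every fold, so no peak is chosen using held-out specimens. A scikit-learn sketch with stand-in data; the scoring function and the number of retained peaks are placeholders, not the study's procedure.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.random.rand(41, 1676)              # stand-in for peak-height ratios (specimens x m/z values)
y = np.array([1] * 24 + [0] * 17)         # 24 diseased, 17 healthy

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=124)),   # selection is refit inside each fold
    ("clf", GaussianNB()),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(pipe, X, y, cv=cv).mean())
```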
{
"docid": "8314487867961ae2572997e2a7315c9c",
"text": "Social cognitive neuroscience examines social phenomena and processes using cognitive neuroscience research tools such as neuroimaging and neuropsychology. This review examines four broad areas of research within social cognitive neuroscience: (a) understanding others, (b) understanding oneself, (c) controlling oneself, and (d) the processes that occur at the interface of self and others. In addition, this review highlights two core-processing distinctions that can be neurocognitively identified across all of these domains. The distinction between automatic versus controlled processes has long been important to social psychological theory and can be dissociated in the neural regions contributing to social cognition. Alternatively, the differentiation between internally-focused processes that focus on one's own or another's mental interior and externally-focused processes that focus on one's own or another's visible features and actions is a new distinction. This latter distinction emerges from social cognitive neuroscience investigations rather than from existing psychological theories demonstrating that social cognitive neuroscience can both draw on and contribute to social psychological theory.",
"title": ""
},
{
"docid": "841a5ecba126006e1deb962473662788",
"text": "In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. To gain additional insights of its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.",
"title": ""
},
{
"docid": "45719c2127204b4eb169fccd2af0bf82",
"text": "A face hallucination algorithm is proposed to generate high-resolution images from JPEG compressed low-resolution inputs by decomposing a deblocked face image into structural regions such as facial components and non-structural regions like the background. For structural regions, landmarks are used to retrieve adequate high-resolution component exemplars in a large dataset based on the estimated head pose and illumination condition. For non-structural regions, an efficient generic super resolution algorithm is applied to generate high-resolution counterparts. Two sets of gradient maps extracted from these two regions are combined to guide an optimization process of generating the hallucination image. Numerous experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art hallucination methods on JPEG compressed face images with different poses, expressions, and illumination conditions.",
"title": ""
},
{
"docid": "7256d6c5bebac110734275d2f985ab31",
"text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations are recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, user-based collaborative filtering approach and a similar work in the literature. According to experimental results, our algorithm outperforms these approaches in all of the test cases.",
"title": ""
},
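The recommendation step above rests on a random walk with restart, which can be written in a few lines: propagate probability mass over a row-normalized graph and mix a restart back to the querying user's node at every step. A toy sketch with a small dense adjacency matrix; building the actual LBSN graph over users, locations and context is the part specific to the paper and is not reproduced here.

```python
import numpy as np

def random_walk_with_restart(A, restart_node, alpha=0.15, tol=1e-8, max_iter=1000):
    """A: adjacency matrix (n x n). Returns steady-state visiting probabilities of a
    walk that jumps back to restart_node with probability alpha at every step."""
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
    e = np.zeros(A.shape[0]); e[restart_node] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - alpha) * P.T @ p + alpha * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], dtype=float)
scores = random_walk_with_restart(A, restart_node=0)
print(np.argsort(-scores))                        # nodes ranked for recommendation
```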
{
"docid": "672be163a987da17aca6ccbdbc4b9145",
"text": "Clothing detection is an important step for retrieving similar clothing items, organizing fashion photos, artificial intelligence powered shopping assistants and automatic labeling of large catalogues. Training a deep learning based clothing detector requires pre-defined categories (dress, pants etc) and a high volume of annotated image data for each category. However, fashion evolves and new categories are constantly introduced in the marketplace. For example, consider the case of jeggings which is a combination of jeans and leggings. Detection of this new category will require adding annotated data specific to jegging class and subsequently relearning the weights for the deep network. In this paper, we propose a novel object detection method that can handle newer categories without the need of obtaining new labeled data and retraining the network. Our approach learns the visual similarities between various clothing categories and predicts a tree of categories. The resulting framework significantly improves the generalization capabilities of the detector to novel clothing products.",
"title": ""
},
{
"docid": "036cbf58561de8bfa01ddc4fa8d7b8f2",
"text": "The purpose of this paper is to discover a semi-optimal set of trading rules and to investigate its effectiveness as applied to Egyptian Stocks. The aim is to mix different categories of technical trading rules and let an automatic evolution process decide which rules are to be used for particular time series. This difficult task can be achieved by using genetic algorithms (GA's), they permit the creation of artificial experts taking their decisions from an optimal subset of the a given set of trading rules. The GA's based on the survival of the fittest, do not guarantee a global optimum but they are known to constitute an effective approach in optimizing non-linear functions. Selected liquid stocks are tested and GA trading rules were compared with other conventional and well known technical analysis rules. The Proposed GA system showed clear better average profit and in the same high sharpe ratio, which indicates not only good profitability but also better risk-reward trade-off",
"title": ""
},
{
"docid": "f120d34996b155a413247add6adc6628",
"text": "The storage and computation requirements of Convolutional Neural Networks (CNNs) can be prohibitive for exploiting these models over low-power or embedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmented with a sparsity-promoting penalty term. The sparsity structure of the network is identified using the Alternating Direction Method of Multipliers (ADMM), which is widely used in large optimization problems. This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-inducing penalty functions to decompose the minimization problem into sub-problems that can be solved sequentially. Applying our method to a variety of state-of-the-art CNN models, our proposed method is able to simplify the original model, generating models with less computation and fewer parameters, while maintaining and often improving generalization performance. Accomplishments on a variety of models strongly verify that our proposed ADMM-based method can be a very useful tool for simplifying and improving deep CNNs.",
"title": ""
},
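The alternation described above, one step improving the task loss and one step promoting sparsity, is the standard ADMM split for an ℓ1-penalised objective. The toy sketch below applies it to a least-squares problem; in the CNN setting the closed-form loss step would be replaced by a few epochs of SGD on the recognition loss, and the penalty weights here are purely illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(X, y, lam=0.1, rho=1.0, iters=200):
    """minimize 0.5 * ||Xw - y||^2 + lam * ||z||_1  subject to  w = z, via ADMM."""
    n, d = X.shape
    w, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
    M = np.linalg.inv(X.T @ X + rho * np.eye(d))   # cached for the w-update
    Xty = X.T @ y
    for _ in range(iters):
        w = M @ (Xty + rho * (z - u))              # "loss" step
        z = soft_threshold(w + u, lam / rho)       # "sparsity-promoting" step
        u = u + w - z                              # dual update
    return z

X = np.random.randn(50, 20)
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 0.7]
y = X @ w_true + 0.01 * np.random.randn(50)
print(np.round(admm_l1(X, y), 2))
```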
{
"docid": "d00c9d6286f8c061db52f898d16acb2f",
"text": "There is substantial interest in the role of plant secondary metabolites as protective dietary agents. In particular, the involvement of flavonoids and related compounds has become a major topic in human nutrition research. Evidence from epidemiological and human intervention studies is emerging regarding the protective effects of various (poly)phenol-rich foods against several chronic diseases, including neurodegeneration, cancer and cardiovascular diseases. In recent years, the use of HPLC–MS for the analysis of flavonoids and related compounds in foods and biological samples has significantly enhanced our understanding of (poly)phenol bioavailability. These advancements have also led to improvements in the available food composition and metabolomic databases, and consequently in the development of biomarkers of (poly)phenol intake to use in epidemiological studies. Efforts to create adequate standardised materials and well-matched controls to use in randomised controlled trials have also improved the quality of the available data. In vitro investigations using physiologically achievable concentrations of (poly)phenol metabolites and catabolites with appropriate model test systems have provided new and interesting insights on potential mechanisms of actions. This article will summarise recent findings on the bioavailability and biological activity of (poly)phenols, focusing on the epidemiological and clinical evidence of beneficial effects of flavonoids and related compounds on urinary tract infections, cognitive function and age-related cognitive decline, cancer and cardiovascular disease.",
"title": ""
}
] |
scidocsrr
|
0fb79131e390b1e0dbb9eb58f003019c
|
A co-training method for identifying the same person across social networks
|
[
{
"docid": "8e5eb10f2a9d632e9918c68060bac9f8",
"text": "People today typically use multiple online social networks (Facebook, Twitter, Google+, LinkedIn, etc.). Each online network represents a subset of their “real” ego-networks. An interesting and challenging problem is to reconcile these online networks, that is, to identify all the accounts belonging to the same individual. Besides providing a richer understanding of social dynamics, the problem has a number of practical applications. At first sight, this problem appears algorithmically challenging. Fortunately, a small fraction of individuals explicitly link their accounts across multiple networks; our work leverages these connections to identify a very large fraction of the network. Our main contributions are to mathematically formalize the problem for the first time, and to design a simple, local, and efficient parallel algorithm to solve it. We are able to prove strong theoretical guarantees on the algorithm’s performance on well-established network models (Random Graphs, Preferential Attachment). We also experimentally confirm the effectiveness of the algorithm on synthetic and real social network data sets.",
"title": ""
},
{
"docid": "99a728e8b9a351734db9b850fe79bd61",
"text": "Predicting anchor links across social networks has important implications to an array of applications, including cross-network information diffusion and cross-domain recommendation. One challenging problem is: whether and to what extent we can address the anchor link prediction problem, if only structural information of networks is available. Most existing methods, unsupervised or supervised, directly work on networks themselves rather than on their intrinsic structural regularities, and thus their effectiveness is sensitive to the high dimension and sparsity of networks. To offer a robust method, we propose a novel supervised model, called PALE, which employs network embedding with awareness of observed anchor links as supervised information to capture the major and specific structural regularities and further learns a stable cross-network mapping for predicting anchor links. Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "57cdf599b147bab983ffca8ddd0aa62b",
"text": "Usernames are ubiquitously used for identification and authentication purposes on web services and the Internet at large, ranging from the local-part of email addresses to identifiers in social networks. Usernames are generally alphanumerical strings chosen by the users and, by design, are unique within the scope of a single organization or web service. In this paper we investigate the feasibility of using usernames to trace or link multiple profiles across services that belong to the same individual. The intuition is that the probability that two usernames refer to the same physical person strongly depends on the “entropy” of the username string itself. Our experiments, based on usernames gathered from real web services, show that a significant portion of the users’ profiles can be linked using their usernames. In collecting the data needed for our study, we also show that users tend to choose a small number of related usernames and use them across many services. To the best of our knowledge, this is the first time that usernames are considered as a source of information when profiling users on the Internet.",
"title": ""
},
{
"docid": "27513d1309f370e9bd8426d0d9971447",
"text": "Online social networks can often be represented as heterogeneous information networks containing abundant information about: who, where, when and what. Nowadays, people are usually involved in multiple social networks simultaneously. The multiple accounts of the same user in different networks are mostly isolated from each other without any connection between them. Discovering the correspondence of these accounts across multiple social networks is a crucial prerequisite for many interesting inter-network applications, such as link recommendation and community analysis using information from multiple networks. In this paper, we study the problem of anchor link prediction across multiple heterogeneous social networks, i.e., discovering the correspondence among different accounts of the same user. Unlike most prior work on link prediction and network alignment, we assume that the anchor links are one-to-one relationships (i.e., no two edges share a common endpoint) between the accounts in two social networks, and a small number of anchor links are known beforehand. We propose to extract heterogeneous features from multiple heterogeneous networks for anchor link prediction, including user's social, spatial, temporal and text information. Then we formulate the inference problem for anchor links as a stable matching problem between the two sets of user accounts in two different networks. An effective solution, MNA (Multi-Network Anchoring), is derived to infer anchor links w.r.t. the one-to-one constraint. Extensive experiments on two real-world heterogeneous social networks show that our MNA model consistently outperform other commonly-used baselines on anchor link prediction.",
"title": ""
}
] |
[
{
"docid": "8a64ef929a98a27c013f7e7edcfed594",
"text": "In systems neuroscience there is a big gap between what theorists postulate (i.e., grand unifying theories about the general principles underlying cortical processes such as the predictive processing account) and what empiricists measure (i.e., reaction times, pupil dilations, blood-oxygenated level dependent signal in brain areas, magnetic pulses). It is becoming increasingly difficult for the theorists to come up with empirically testable hypotheses and for the empiricists to use their findings to confirm or refute a theory. We propose a research methodology based on robot simulations that may help bridge that gap. The methodology is summarized by four keywords: Formalize verbal theories into computational models; Operationalize this computational model into a working robot implementation; Explore the consequences of various design choices and parameter settings to generate empirically testable hypotheses; and finally Study these hypotheses in behavioral or imaging experiments. We lay out a research program that aims at investigating various open issues in predictive processing and exemplify our approach in a simple case study.",
"title": ""
},
{
"docid": "956b7139333421343e8ed245a63a7b4b",
"text": "Purpose – During the last decades, different quality management concepts, including total quality management (TQM), six sigma and lean, have been applied by many different organisations. Although much important work has been documented regarding TQM, six sigma and lean, a number of questions remain concerning the applicability of these concepts in various organisations and contexts. Hence, the purpose of this paper is to describe the similarities and differences between the concepts, including an evaluation and criticism of each concept. Design/methodology/approach – Within a case study, a literature review and face-to-face interviews in typical TQM, six sigma and lean organisations have been carried out. Findings – While TQM, six sigma and lean have many similarities, especially concerning origin, methodologies, tools and effects, they differ in some areas, in particular concerning the main theory, approach and the main criticism. The lean concept is slightly different from TQM and six sigma. However, there is a lot to gain if organisations are able to combine these three concepts, as they are complementary. Six sigma and lean are excellent road-maps, which could be used one by one or combined, together with the values in TQM. Originality/value – The paper provides guidance to organisations regarding the applicability and properties of quality concepts. Organisations need to work continuously with customer-orientated activities in order to survive; irrespective of how these activities are labelled. The paper will also serve as a basis for further research in this area, focusing on practical experience of these concepts.",
"title": ""
},
{
"docid": "8988a648262b396bf20489eb92f32110",
"text": "Hyaluronic acid (HA), the main component of extracellular matrix, is considered one of the key players in the tissue regeneration process. It has been proven to modulate via specific HA receptors, inflammation, cellular migration, and angiogenesis, which are the main phases of wound healing. Studies have revealed that most HA properties depend on its molecular size. High molecular weight HA displays anti-inflammatory and immunosuppressive properties, whereas low molecular weight HA is a potent proinflammatory molecule. In this review, the authors summarize the role of HA polymers of different molecular weight in tissue regeneration and provide a short overview of main cellular receptors involved in HA signaling. In addition, the role of HA in 2 major steps of wound healing is examined: inflammation and the angiogenesis process. Finally, the antioxidative properties of HA are discussed and its possible clinical implication presented.",
"title": ""
},
{
"docid": "892661d87138d49aab2a54b7557a7021",
"text": "Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the CaltechUCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.",
"title": ""
},
{
"docid": "688ee7a4bde400a6afbd6972d729fad4",
"text": "Learning-to-Rank ( LtR ) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, in turn exploited to effectively rank them. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts on response time and throughput of Web query processing – it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of stateof-the-art LtR , and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees ( GBRT ), Lambda-Mart ( λ-MART ), and the first public-domain implementation of Oblivious Lambda-Mart ( λ-MART ), an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact on the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the qualitycost space. The experiments conducted show that there is not an overall best algorithm, but the optimal choice depends on the time budget. © 2016 Elsevier Ltd. All rights reserved. ∗ Corresponding author. E-mail addresses: gabriele.capannini@mdh.se (G. Capannini), claudio.lucchese@isti.cnr.it , c.lucchese@isti.cnr.it (C. Lucchese), f.nardini@isti.cnr.it (F.M. Nardini), orlando@unive.it (S. Orlando), r.perego@isti.cnr.it (R. Perego), n.tonellotto@isti.cnr.it (N. Tonellotto). http://dx.doi.org/10.1016/j.ipm.2016.05.004 0306-4573/© 2016 Elsevier Ltd. All rights reserved. Please cite this article as: G. Capannini et al., Quality versus efficiency in document scoring with learning-to-rank models, Information Processing and Management (2016), http://dx.doi.org/10.1016/j.ipm.2016.05.004 2 G. Capannini et al. / Information Processing and Management 0 0 0 (2016) 1–17 ARTICLE IN PRESS JID: IPM [m3Gsc; May 17, 2016;19:28 ] Document Index Base Ranker Top Ranker Features Learning to Rank Algorithm Query First step Second step N docs K docs 1. ............ 2. ............ 3. ............ K. ............ ... ... Results Page(s) Fig. 1. The architecture of a generic machine-learned ranking pipeline.",
"title": ""
},
{
"docid": "83958247682e3400b8ce2765130e1386",
"text": "Deep neural networks have been shown to perform well in many classical machine learning problems, especially in image classification tasks. However, researchers have found that neural networks can be easily fooled, and they are surprisingly sensitive to small perturbations imperceptible to humans. Carefully crafted input images (adversarial examples) can force a well-trained neural network to provide arbitrary outputs. Including adversarial examples during training is a popular defense mechanism against adversarial attacks. In this paper we propose a new defensive mechanism under the generative adversarial network (GAN) framework. We model the adversarial noise using a generative network, trained jointly with a classification discriminative network as a minimax game. We show empirically that our adversarial network approach works well against black box attacks, with performance on par with state-of-art methods such as ensemble adversarial training and adversarial training with projected gradient descent.",
"title": ""
},
{
"docid": "35a0044724854f6fabeb777f80c8acd8",
"text": "Liposuction is one of the most commonly performed aesthetic procedures. It is performed worldwide as an outpatient procedure. However, the complications are underestimated and underreported by caregivers. We present a case of delayed diagnosis of bilothorax secondary to liver and gallbladder injury after tumescent liposuction. A 26-year-old female patient was transferred to our emergency department from an aesthetic clinic with worsening dyspnea, tachypnea and fatigue. She had undergone extensive liposuction of the thighs, buttocks, back and abdomen 5 days prior to presentation. A chest X-ray showed significant right-sided pleural effusion. Thoracentesis was performed and drained bilious fluid. CT scan of the abdomen revealed pleural, liver and gall bladder injury. An exploratory laparoscopy confirmed the findings, the collections were drained; cholecystectomy and intraoperative cholangiogram were performed. The patient did very well postoperatively and was discharged home in 2 days. Even though liposuction is considered a simple office-based procedure, its complications can be fatal. The lack of strict laws that exclusively place this procedure in the hands of medical professionals allow these procedures to still be done by less experienced hands and in outpatient-based settings. Our case serves to highlight yet another unique but potentially fatal complication of liposuction. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "770356d4626698242f9fa78b89109d1b",
"text": "The creation of robust mechanisms for uncertain inference is central to the development of Artificial General Intelligence systems. While probability theory provides a principled foundation for uncertain inference, the mathematics of probability theory has not yet been developed to the point where it is possible to handle every aspect of the uncertain inference process in practical situations using rigorous probabilistic calculations. Due to the need to operate within realistic computational resources, probability theory presently requires augmentation with heuristics in order to be pragmatic for general intelligence (as well as for other purposes such as large-scale data analysis). The authors have been involved with the creation of a novel, general framework for pragmatic probabilistic inference in an AGI context, called Probabilistic Logic Networks (PLN). PLN integrates probability theory with a variety of heuristic inference mechanisms; it encompasses a rich set of first-order and higher-order inference rules, and it is highly flexible and adaptive, and easily configurable. This paper describes a single, critical aspect of the PLN framework, which has to with the quantification of uncertainty. In short, it addresses the question: What should an uncertain truth value be, so that a general intelligence may use it for pragmatic reasoning? We propose a new approach to quantifying uncertainty via a hybridization of Walley’s theory of imprecise probabilities and Bayesian credible intervals. This “indefinite probability” approach provides a general method for calculating the “weight-of-evidence” underlying the conclusions of uncertain inferences. Moreover, both Walley’s imprecise beta-binomial model and standard Bayesian inference can be viewed mathematically as special cases of the more general indefinite probability model. Via exemplifying the use of indefinite probabilities in a variety of PLN inference rules (including exact and heuristic ones), we argue that this mode of quantifying uncertainty may be adequate to serve as an ingredient of powerful artificial general intelligence.",
"title": ""
},
{
"docid": "5a7b68c341e20d5d788e46c089cfd855",
"text": "This study aims at investigating alcoholic inpatients' attachment system by combining a measurement of adult attachment style (AAQ, Hazan and Shaver, 1987. Journal of Personality and Social Psychology, 52(3): 511-524) and the degree of alexithymia (BVAQ, Bermond and Vorst, 1998. Bermond-Vorst Alexithymia Questionnaire, Unpublished data). Data were collected from 101 patients (71 men, 30 women) admitted to a psychiatric hospital in Belgium for alcohol use-related problems, between September 2003 and December 2004. To investigate the research question, cluster analyses and regression analyses are performed. We found that it makes sense to distinguish three subgroups of alcoholic inpatients with different degrees of impairment of the attachment system. Our results also reveal a pattern of correspondence between the severity of psychiatric symptoms-personality disorder traits (ADP-IV), anxiety (STAI), and depression (BDI-II-Nl)-and the severity of the attachment system's impairment. Limitations of the study and suggestions for further research are highlighted and implications for diagnosis and treatment are discussed.",
"title": ""
},
{
"docid": "d52c31b947ee6edf59a5ef416cbd0564",
"text": "Saliency detection for images has been studied for many years, for which a lot of methods have been designed. In saliency detection, background priors, which are often regarded as pseudo-background, are effective clues to find salient objects in images. Although image boundary is commonly used as background priors, it does not work well for images of complex scenes and videos. In this paper, we explore how to identify the background priors for a video and propose a saliency-based method to detect the visual objects by using the background priors. For a video, we integrate multiple pairs of scale-invariant feature transform flows from long-range frames, and a bidirectional consistency propagation is conducted to obtain the accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using spatiotemporal background priors is put forward in the computation of saliency maps, fully taking advantage of appearance and motion information in videos. Experimental results on different challenging data sets show that the proposed method robustly and accurately detects the video objects in both simple and complex scenes and achieves better performance compared with other the state-of-the-art video saliency models.",
"title": ""
},
{
"docid": "73080f337ae7ec5ef0639aec374624de",
"text": "We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called \"Multi-Atlas Label Propagation with Expectation-Maximisation based refinement\" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.",
"title": ""
},
{
"docid": "9018c146d532071e7953cdc79d8ba2c0",
"text": "The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.",
"title": ""
},
{
"docid": "ba32aa9b4dbd92f7998342d8f68cabf7",
"text": "Gomez-Lopez-Hernandez syndrome is a very rare genetic disorder with a distinct phenotype (OMIM 601853). To our knowledge there have been seven cases documented to date. We report on an additional male patient now aged 15 8/12 years with synostosis of the lambdoid suture, partial scalp alopecia, corneal opacity, mental retardation and striking phenotypic features (e.g., brachyturricephaly, hypertelorism, midface hypoplasia and low-set ears) consistent with Gomez-Lopez-Hernandez syndrome. In early childhood the patient demonstrated aggressive behavior and raging periods. He also had seizures that were adequately controlled by medication. Magnetic resonance imaging (MRI) revealed rhombencephalosynapsis, i.e., a rare fusion of the cerebellar hemispheres, also consistent with Gomez-Lopez-Hernandez syndrome. In addition a lipoma of the quadrigeminal plate was observed, a feature not previously described in the seven patients reported in the literature. Cytogenetic and subtelomere analyses were inconspicuous. Microarray-based comparative genomic hybridization (array-CGH) testing revealed five aberrations (partial deletions of 1p21.1, 8q24.23, 10q11.2, Xq26.3 and partial duplication of 19p13.2), which, however, have been classified as normal variants. Array-CGH has not been published in the previously reported children. The combination of certain craniofacial features, including partial alopecia, and the presence of rhombencephalosynapsis in the MRI are suggestive of Gomez-Lopez-Hernandez syndrome. Children with this syndrome should undergo a certain social pediatric protocol including EEG diagnostics, ophthalmological investingation, psychological testing, management of behavioral problems and genetic counseling.",
"title": ""
},
{
"docid": "ed13193df5db458d0673ccee69700bc0",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "469e3a398e0d2772467fd14e5dd44d8b",
"text": "We present a method for simultaneously recovering shape and spatially varying reflectance of a surface from photometric stereo images. The distinguishing feature of our approach is its generality; it does not rely on a specific parametric reflectance model and is therefore purely ldquodata-drivenrdquo. This is achieved by employing novel bi-variate approximations of isotropic reflectance functions. By combining this new approximation with recent developments in photometric stereo, we are able to simultaneously estimate an independent surface normal at each point, a global set of non-parametric ldquobasis materialrdquo BRDFs, and per-point material weights. Our experimental results validate the approach and demonstrate the utility of bi-variate reflectance functions for general non-parametric appearance capture.",
"title": ""
},
{
"docid": "a59c73927c732521cfb58385b58fab32",
"text": "BACKGROUND\nFrequent use of Facebook and other social networks is thought to be associated with certain behavioral changes, and some authors have expressed concerns about its possible detrimental effect on mental health. In this work, we investigated the relationship between social networking and depression indicators in adolescent population.\n\n\nSUBJECTS AND METHODS\nTotal of 160 high school students were interviewed using an anonymous, structured questionnaire and Back Depression Inventory - second edition (BDI-II-II). Apart from BDI-II-II, students were asked to provide the data for height and weight, gender, average daily time spent on social networking sites, average time spent watching TV, and sleep duration in a 24-hour period.\n\n\nRESULTS\nAverage BDI-II-II score was 8.19 (SD=5.86). Average daily time spent on social networking was 1.86 h (SD=2.08 h), and average time spent watching TV was 2.44 h (SD=1.74 h). Average body mass index of participants was 21.84 (SD=3.55) and average sleep duration was 7.37 (SD=1.82). BDI-II-II score indicated minimal depression in 104 students, mild depression in 46 students, and moderate depression in 10 students. Statistically significant positive correlation (p<0.05, R=0.15) was found between BDI-II-II score and the time spent on social networking.\n\n\nCONCLUSIONS\nOur results indicate that online social networking is related to depression. Additional research is required to determine the possible causal nature of this relationship.",
"title": ""
},
{
"docid": "607cd26b9c51b5b52d15087d0e6662cb",
"text": "Pseudo-NMOS level-shifters consume large static current making them unsuitable for portable devices implemented with HV CMOS. Dynamic level-shifters help reduce power consumption. To reduce on-current to a minimum (sub-nanoamp), modifications are proposed to existing pseudo-NMOS and dynamic level-shifter circuits. A low power three transistor static level-shifter design with a resistive load is also presented.",
"title": ""
},
{
"docid": "d6628b102e8f87e8ce58c2e3483a7beb",
"text": "Nowadays, Big Data platforms allow the analysis of massive data streams in an efficient way. However, the services they provide are often too raw, thus the implementation of advanced real-world applications requires a non-negligible effort for interfacing with such services. This also complicates the task of choosing which one of the many available alternatives is the most appropriate for the application at hand. In this paper, we present a comparative study of the three major opensource Big Data platforms for stream processing, as performed by using our novel RAMS framework. Although the results we present are specific for our use case (recognition of suspect people from massive video streams), the generality of the RAMS framework allows both considering such results as valid for similar applications and implementing different use cases on top of Big Data platforms with very limited effort.",
"title": ""
},
{
"docid": "d18f9954bc8140fbf18e723f80523e8f",
"text": "A wideband circularly polarized reconfigurable patch antenna with L-shaped feeding probes is presented, which can generate unidirectional radiation performance that is switchable between left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). To realize this property, an L-probe fed square patch antenna is chosen as the radiator. A compact reconfigurable feeding network is implemented to excite the patch and generate either LHCP or RHCP over a wide operating bandwidth. The proposed antenna achieves the desired radiation patterns and has excellent characteristics, including a wide bandwidth, a compact structure, and a low profile. Measured results exhibit approximately identical performance for both polarization modes. Wide impedance, 31.6% from 1.2 to 1.65 GHz, and axial-ratio, 20.8% from 1.29 to 1.59 GHz, bandwidths are obtained. The gain is very stable across the entire bandwidth with a 6.9-dBic peak value. The reported circular-polarization reconfigurable antenna can mitigate the polarization mismatching problem in multipath wireless environments, increase the channel capacity of the system, and enable polarization coding.",
"title": ""
},
{
"docid": "b712552d760c887131f012e808dca253",
"text": "To the same utterance, people’s responses in everyday dialogue may be diverse largely in terms of content semantics, speaking styles, communication intentions and so on. Previous generative conversational models ignore these 1-to-n relationships between a post to its diverse responses, and tend to return high-frequency but meaningless responses. In this study we propose a mechanism-aware neural machine for dialogue response generation. It assumes that there exists some latent responding mechanisms, each of which can generate different responses for a single input post. With this assumption we model different responding mechanisms as latent embeddings, and develop a encoder-diverter-decoder framework to train its modules in an end-to-end fashion. With the learned latent mechanisms, for the first time these decomposed modules can be used to encode the input into mechanism-aware context, and decode the responses with the controlled generation styles and topics. Finally, the experiments with human judgements, intuitive examples, detailed discussions demonstrate the quality and diversity of the generated responses with 9.80% increase of acceptable ratio over the best of six baseline methods.",
"title": ""
}
] |
scidocsrr
|
c5d32a7368daeca500124b54e2886523
|
Passive Radar Detection With Noisy Reference Channel Using Principal Subspace Similarity
|
[
{
"docid": "df354ff3f0524d960af7beff4ec0a68b",
"text": "The paper presents digital beamforming for Passive Coherent Location (PCL) radar. The considered circular antenna array is a part of a passive system developed at Warsaw University of Technology. The system is based on FM radio transmitters. The array consists of eight half-wave dipoles arranged in a circular array covering 360deg with multiple beams. The digital beamforming procedure is presented, including mutual coupling correction and antenna pattern optimization. The results of field calibration and measurements are also shown.",
"title": ""
}
] |
[
{
"docid": "0ef6e54d7190dde80ee7a30c5ecae0c3",
"text": "Games have been an important tool for motivating undergraduate students majoring in computer science and engineering. However, it is difficult to build an entire game for education from scratch, because the task requires high-level programming skills and expertise to understand the graphics and physics. Recently, there have been many different game artificial intelligence (AI) competitions, ranging from board games to the state-of-the-art video games (car racing, mobile games, first-person shooting games, real-time strategy games, and so on). The competitions have been designed such that participants develop their own AI module on top of public/commercial games. Because the materials are open to the public, it is quite useful to adopt them for an undergraduate course project. In this paper, we report our experiences using the Angry Birds AI Competition for such a project-based course. In the course, teams of students consider computer vision, strategic decision-making, resource management, and bug-free coding for their outcome. To promote understanding of game contents generation and extensive testing on the generalization abilities of the student's AI program, we developed software to help them create user-created levels. Students actively participated in the project and the final outcome was comparable with that of successful entries in the 2013 International Angry Birds AI Competition. Furthermore, it leads to the development of a new parallelized Angry Birds AI Competition platform with undergraduate students aiming to use advanced optimization algorithms for their controllers.",
"title": ""
},
{
"docid": "2a76205b80c90ff9a4ca3ccb0434bb03",
"text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.",
"title": ""
},
{
"docid": "cffce89fbb97dc1d2eb31a060a335d3c",
"text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques -namely, Naı̈ve Bayes and Maximum Entropy (ME)when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. ◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.",
"title": ""
},
{
"docid": "a28199159d7508a7ef57cd20adf084c2",
"text": "Brain-computer interfaces (BCIs) translate brain activity into signals controlling external devices. BCIs based on visual stimuli can maintain communication in severely paralyzed patients, but only if intact vision is available. Debilitating neurological disorders however, may lead to loss of intact vision. The current study explores the feasibility of an auditory BCI. Sixteen healthy volunteers participated in three training sessions consisting of 30 2-3 min runs in which they learned to increase or decrease the amplitude of sensorimotor rhythms (SMR) of the EEG. Half of the participants were presented with visual and half with auditory feedback. Mood and motivation were assessed prior to each session. Although BCI performance in the visual feedback group was superior to the auditory feedback group there was no difference in performance at the end of the third session. Participants in the auditory feedback group learned slower, but four out of eight reached an accuracy of over 70% correct in the last session comparable to the visual feedback group. Decreasing performance of some participants in the visual feedback group is related to mood and motivation. We conclude that with sufficient training time an auditory BCI may be as efficient as a visual BCI. Mood and motivation play a role in learning to use a BCI.",
"title": ""
},
{
"docid": "e097d29240e7b3a83ad437b5fb7014f1",
"text": "We contribute an approach for interactive policy learning through expert demonstration that allows an agent to actively request and effectively represent demonstration examples. In order to address the inherent uncertainty of human demonstration, we represent the policy as a set of Gaussian mixture models (GMMs), where each model, with multiple Gaussian components, corresponds to a single action. Incrementally received demonstration examples are used as training data for the GMM set. We then introduce our confident execution approach, which focuses learning on relevant parts of the domain by enabling the agent to identify the need for and request demonstrations for specific parts of the state space. The agent selects between demonstration and autonomous execution based on statistical analysis of the uncertainty of the learned Gaussian mixture set. As it achieves proficiency at its task and gains confidence in its actions, the agent operates with increasing autonomy, eliminating the need for unnecessary demonstrations of already acquired behavior, and reducing both the training time and the demonstration workload of the expert. We validate our approach with experiments in simulated and real robot domains.",
"title": ""
},
{
"docid": "711d8291683bd23e2060b56ce7120f23",
"text": "Solving simple arithmetic word problems is one of the challenges in Natural Language Understanding. This paper presents a novel method to learn to use formulas to solve simple arithmetic word problems. Our system, analyzes each of the sentences to identify the variables and their attributes; and automatically maps this information into a higher level representation. It then uses that representation to recognize the presence of a formula along with its associated variables. An equation is then generated from the formal description of the formula. In the training phase, it learns to score the <formula, variables> pair from the systematically generated higher level representation. It is able to solve 86.07% of the problems in a corpus of standard primary school test questions and beats the state-of-the-art by",
"title": ""
},
{
"docid": "88a8f162017f80c17be58faad16a6539",
"text": "Instruction List (IL) is a simple typed assembly language commonly used in embedded control. There is little tool support for IL and, although defined in the IEC 61131-3 standard, there is no formal semantics. In this work we develop a formal operational semantics. Moreover, we present an abstract semantics, which allows approximative program simulation for a (possibly infinte) set of inputs in one simulation run. We also extended this framework to an abstract interpretation based analysis, which is implemented in our tool Homer. All these analyses can be carried out without knowledge of formal methods, which is typically not present in the IL community.",
"title": ""
},
{
"docid": "e096a7cefed3409e0e4a53aa9fc1f382",
"text": "We consider the problem of clustering incomplete data drawn from a union of subspaces. Classical subspace clustering methods are not applicable to this problem because the data are incomplete, while classical low-rank matrix completion methods may not be applicable because data in multiple subspaces may not be low rank. This paper proposes and evaluates two new approaches for subspace clustering and completion. The first one generalizes the sparse subspace clustering algorithm so that it can obtain a sparse representation of the data using only the observed entries. The second one estimates a suitable kernel matrix by assuming a random model for the missing entries and obtains the sparse representation from this kernel. Experiments on synthetic and real data show the advantages and disadvantages of the proposed methods, which all outperform the natural approach (low-rank matrix completion followed by sparse subspace clustering) when the data matrix is high-rank or the percentage of missing entries is large.",
"title": ""
},
{
"docid": "df0ef093e337d76f4902671065ce4fbc",
"text": "Refactoring, the activity of changing source code design without affecting its external behavior, is a widely used practice among developers, since it is considered to positively affect the quality of software systems. However, there are some \"human factors\" to be considered while performing refactoring, including developers knowledge of systems architecture. Recent studies showed how much \"people\" metrics, such as code ownership, might affect software quality as well. In this preliminary study we investigated the relationship between code ownership and refactoring activity performed by developers. This study can provide useful insights on who performs refactoring and help team leaders to properly manage human resources during software development.",
"title": ""
},
{
"docid": "bd7581bbb11e45685ccf44af8328c1dd",
"text": "The Full Bridge converter topology modulated in phase shift is one of the most popular converters used to obtain high efficiency conversion, especially in high power and high voltage applications. This converter topology combines the simplicity of fixed frequency modulations with the soft switching characteristic of resonant converters but, if a diode rectifier is used as output stage, it suffers of severe overshoot voltage spikes and ringing across the rectifier. In this paper, a new regenerative active snubber is widely studied and developed to reduce this drawback. The proposed snubber is based on a combination of an active clamp with a buck converter used to discharge the snubber capacitor. The snubber gate signal is obtained by using those of the phase shift modulation.",
"title": ""
},
{
"docid": "f6a339de620c058332fa469b37f1ecdd",
"text": "Typical mobile robot structures (e.g. wheelchair or carlike, ...) do not have the required mobility for common applications such as displacement in a corridor, hospital, office, ... New structures based on the \"universal wheel\" (i.e. a wheel equipped with free rotating rollers) have been developed to increase mobility. However these structures have important drawbacks such as vertical vibration and limited load capacity. This paper presents a comparison of several types of universal wheel performances based on the three following criteria: load capacity, surmountable bumps and vertical vibration. It is hoped that this comparison will help the designer in the selection of the best suitable solution for his application.",
"title": ""
},
{
"docid": "443637fcc9f9efcf1026bb64aa0a9c97",
"text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.",
"title": ""
},
{
"docid": "388f398a1696cdaa98a6e028b691e342",
"text": "A theoretical model for analogue computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analogue computations via the timing of single spikes in networks of detailed compartmental neuron models. In this way, one arrives at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of excitatory synapses. A correspondinglayered architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs.",
"title": ""
},
{
"docid": "dc9cbb95282caf469a047b52cf2a51a6",
"text": "We propose using machine learning techniques to analyze the shape of living cells in phase-contrast microscopy images. Large scale studies of cell shape are needed to understand the response of cells to their environment. Manual analysis of thousands of microscopy images, however, is time-consuming and error-prone and necessitates automated tools. We show how a combination of shape-based and appearance-based features of fibroblast cells can be used to classify their morphological state, using the Adaboost algorithm. The classification accuracy of our method approaches the agreement between two expert observers. We also address the important issue of clutter mitigation by developing a machine learning approach to distinguish between clutter and cells in time-lapse microscopy image sequences.",
"title": ""
},
{
"docid": "9d6758fce6873afe682a310b70bab8b3",
"text": "In order to better accommodate the dramatically increasing demand for data caching and computing services, storage and computation capabilities should be endowed to some of the intermediate nodes within the network. In this paper, we design a novel virtualized heterogeneous networks framework aiming at enabling content caching and computing. With the virtualization of the whole system, the communication, computing and caching resources can be shared among all users associated with different virtual service providers. We formulate the virtual resource allocation strategy as a joint optimization problem, where the gains of not only virtualization but also caching and computing are taken into consideration in the proposed architecture. In addition, a distributed algorithm based on alternating direction method of multipliers is adopted to solve the formulated problem, in order to reduce the computational complexity and signaling overhead. Finally, extensive simulations are presented to show the effectiveness of the proposed scheme under different system parameters.",
"title": ""
},
{
"docid": "b6d3ac278fd39745caa0bb3658a2fab1",
"text": "Consider two data providers, each maintaining private records of different feature sets about common entities. They aim to learn a linear model jointly in a federated setting, namely, data is local and a shared model is trained from locally computed updates. In contrast with most work on distributed learning, in this scenario (i) data is split vertically, i.e. by features, (ii) only one data provider knows the target variable and (iii) entities are not linked across the data providers. Hence, to the challenge of private learning, we add the potentially negative consequences of mistakes in entity resolution. Our contribution is twofold. First, we describe a three-party end-to-end solution in two phases—privacy-preserving entity resolution and federated logistic regression over messages encrypted with an additively homomorphic scheme—, secure against a honest-but-curious adversary. The system allows learning without either exposing data in the clear or sharing which entities the data providers have in common. Our implementation is as accurate as a naive non-private solution that brings all data in one place, and scales to problems with millions of entities with hundreds of features. Second, we provide what is to our knowledge the first formal analysis of the impact of entity resolution’s mistakes on learning, with results on how optimal classifiers, empirical losses, margins and generalisation abilities are affected. Our results bring a clear and strong support for federated learning: under reasonable assumptions on the number and magnitude of entity resolution’s mistakes, it can be extremely beneficial to carry out federated learning in the setting where each peer’s data provides a significant uplift to the other. ∗All authors contributed equally. Richard Nock is jointly with the Australian National University & the University of Sydney. Giorgio Patrini is now at the University of Amsterdam. 1 ar X iv :1 71 1. 10 67 7v 1 [ cs .L G ] 2 9 N ov 2 01 7",
"title": ""
},
{
"docid": "036526b572707282a50bc218b72e5862",
"text": "Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has shown to be close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much faster. Recently, many research works have developed efficient optimization methods to construct linear classifiers and applied them to some large-scale applications. In this paper, we give a comprehensive survey on the recent development of this active research area.",
"title": ""
},
{
"docid": "d7538c23aa43edce6cfde8f2125fd3bb",
"text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. The holographic-laser-drawing technique has enabled three things; (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels were demonstrated.",
"title": ""
},
{
"docid": "b93c2265b4420b62ffbbf1a5e6d773f8",
"text": "The objective of this paper is to instill in students motivation and interest for what they are studying a little bit further of the theory they learn in classroom. Sometimes students prefer more interactive classes and want to know why the material given in class is helpful in their career. Do we have a solution for this? Autonomous Robotic Vehicle (ARV) projects can be the solution to this problem. This type of project are interdisciplinary and offer students varieties of challenges, while involving different areas of interest like programming, design, assembling and testing. One of the best ARV project is the Micromouse. The Micromouse is a small robot that solves mazes. The basic idea is that the student makes his own Micromouse from scratch using knowledge acquired from different classes and the research done. This project also helps the student to develop teamwork skills and creativity to complete the different challenges and objectives that appear when building a Micromouse. The student learns the importance of working with students from other engineering concentrations, which allows him to experience how a career in engineer will be.",
"title": ""
},
{
"docid": "5654bea8e2fe999fe52ec7536edd0f52",
"text": "Mobile app developers constantly monitor feedback in user reviews with the goal of improving their mobile apps and better meeting user expectations. Thus, automated approaches have been proposed in literature with the aim of reducing the effort required for analyzing feedback contained in user reviews via automatic classification/prioritization according to specific topics. In this paper, we introduce SURF (Summarizer of User Reviews Feedback), a novel approach to condense the enormous amount of information that developers of popular apps have to manage due to user feedback received on a daily basis. SURF relies on a conceptual model for capturing user needs useful for developers performing maintenance and evolution tasks. Then it uses sophisticated summarisation techniques for summarizing thousands of reviews and generating an interactive, structured and condensed agenda of recommended software changes. We performed an end-to-end evaluation of SURF on user reviews of 17 mobile apps (5 of them developed by Sony Mobile), involving 23 developers and researchers in total. Results demonstrate high accuracy of SURF in summarizing reviews and the usefulness of the recommended changes. In evaluating our approach we found that SURF helps developers in better understanding user needs, substantially reducing the time required by developers compared to manually analyzing user (change) requests and planning future software changes.",
"title": ""
}
] |
scidocsrr
|
55c57dfb6f70f798bc2bff0c025f17ed
|
Interference Reduction in Multi-Cell Massive MIMO Systems I: Large-Scale Fading Precoding and Decoding
|
[
{
"docid": "d14a60ee9a51e52ec00cf25729193568",
"text": "Time-Division Duplexing (TDD) allows to estimate the downlink channels for an arbitrarily large number of base station antennas from a finite number of orthogonal uplink pilot signals, by exploiting channel reciprocity. Based on this observation, a recently proposed \"Massive MIMO\" scheme was shown to achieve unprecedented spectral efficiency in realistic conditions of distance-dependent pathloss and channel coherence time and bandwidth. The main focus and contribution of this paper is an improved Network-MIMO TDD architecture achieving spectral efficiencies comparable with \"Massive MIMO\", with one order of magnitude fewer antennas per active user per cell (roughly, from 500 to 50 antennas). The proposed architecture is based on a family of Network-MIMO schemes defined by small clusters of cooperating base stations, zero-forcing multiuser MIMO precoding with suitable inter-cluster interference mitigation constraints, uplink pilot signals allocation and frequency reuse across cells. The key idea consists of partitioning the users into equivalence classes, optimizing the Network-MIMO scheme for each equivalence class, and letting a scheduler allocate the channel time-frequency dimensions to the different classes in order to maximize a suitable network utility function that captures a desired notion of fairness. This results in a mixed-mode Network-MIMO architecture, where different schemes, each of which is optimized for the served user equivalence class, are multiplexed in time-frequency. In order to carry out the performance analysis and the optimization of the proposed architecture in a systematic and computationally efficient way, we consider the large-system regime where the number of users, the number of antennas, and the channel coherence block length go to infinity with fixed ratios.",
"title": ""
}
] |
[
{
"docid": "22c9f931198f054e7994e7f1db89a194",
"text": "Learning a good distance metric plays a vital role in many multimedia retrieval and data mining tasks. For example, a typical content-based image retrieval (CBIR) system often relies on an effective distance metric to measure similarity between any two images. Conventional CBIR systems simply adopting Euclidean distance metric often fail to return satisfactory results mainly due to the well-known semantic gap challenge. In this article, we present a novel framework of Semi-Supervised Distance Metric Learning for learning effective distance metrics by exploring the historical relevance feedback log data of a CBIR system and utilizing unlabeled data when log data are limited and noisy. We formally formulate the learning problem into a convex optimization task and then present a new technique, named as “Laplacian Regularized Metric Learning” (LRML). Two efficient algorithms are then proposed to solve the LRML task. Further, we apply the proposed technique to two applications. One direct application is for Collaborative Image Retrieval (CIR), which aims to explore the CBIR log data for improving the retrieval performance of CBIR systems. The other application is for Collaborative Image Clustering (CIC), which aims to explore the CBIR log data for enhancing the clustering performance of image pattern clustering tasks. We conduct extensive evaluation to compare the proposed LRML method with a number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information. Encouraging results validate the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "877e7654a4e42ab270a96e87d32164fd",
"text": "The presence of gender stereotypes in many aspects of society is a well-known phenomenon. In this paper, we focus on studying such stereotypes and bias in Hindi movie industry (Bollywood). We analyze movie plots and posters for all movies released since 1970. The gender bias is detected by semantic modeling of plots at inter-sentence and intrasentence level. Different features like occupation, introduction of cast in text, associated actions and descriptions are captured to show the pervasiveness of gender bias and stereotype in movies. We derive a semantic graph and compute centrality of each character and observe similar bias there. We also show that such bias is not applicable for movie posters where females get equal importance even though their character has little or no impact on the movie plot. Furthermore, we explore the movie trailers to estimate on-screen time for males and females and also study the portrayal of emotions by gender in them. The silver lining is that our system was able to identify 30 movies over last 3 years where such stereotypes were broken.",
"title": ""
},
{
"docid": "1af7a41e5cac72ed9245b435c463b366",
"text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.",
"title": ""
},
{
"docid": "a72c9cd8bdf4aec0d265dd4a5fff2826",
"text": "We propose a robust quantization-based image watermarking scheme, called the gradient direction watermarking (GDWM), based on the uniform quantization of the direction of gradient vectors. In GDWM, the watermark bits are embedded by quantizing the angles of significant gradient vectors at multiple wavelet scales. The proposed scheme has the following advantages: 1) increased invisibility of the embedded watermark because the watermark is embedded in significant gradient vectors, 2) robustness to amplitude scaling attacks because the watermark is embedded in the angles of the gradient vectors, and 3) increased watermarking capacity as the scheme uses multiple-scale embedding. The gradient vector at a pixel is expressed in terms of the discrete wavelet transform (DWT) coefficients. To quantize the gradient direction, the DWT coefficients are modified based on the derived relationship between the changes in the coefficients and the change in the gradient direction. Experimental results show that the proposed GDWM outperforms other watermarking methods and is robust to a wide range of attacks, e.g., Gaussian filtering, amplitude scaling, median filtering, sharpening, JPEG compression, Gaussian noise, salt & pepper noise, and scaling.",
"title": ""
},
{
"docid": "2ec6cb6ae25384cacc7bd8213002a58b",
"text": "Food packaging has evolved from simply a container to hold food to something today that can play an active role in food quality. Many packages are still simply containers, but they have properties that have been developed to protect the food. These include barriers to oxygen, moisture, and flavors. Active packaging, or that which plays an active role in food quality, includes some microwave packaging as well as packaging that has absorbers built in to remove oxygen from the atmosphere surrounding the product or to provide antimicrobials to the surface of the food. Packaging has allowed access to many foods year-round that otherwise could not be preserved. It is interesting to note that some packages have actually allowed the creation of new categories in the supermarket. Examples include microwave popcorn and fresh-cut produce, which owe their existence to the unique packaging that has been developed.",
"title": ""
},
{
"docid": "0c2a2cb741d1d22c5ef3eabd0b525d8d",
"text": "Part-of-speech (POS) tagging is a process of assigning the words in a text corresponding to a particular part of speech. A fundamental version of POS tagging is the identification of words as nouns, verbs, adjectives etc. For processing natural languages, Part of Speech tagging is a prominent tool. It is one of the simplest as well as most constant and statistical model for many NLP applications. POS Tagging is an initial stage of linguistics, text analysis like information retrieval, machine translator, text to speech synthesis, information extraction etc. In POS Tagging we assign a Part of Speech tag to each word in a sentence and literature. Various approaches have been proposed to implement POS taggers. In this paper we present a Marathi part of speech tagger. It is morphologically rich language. Marathi is spoken by the native people of Maharashtra. The general approach used for development of tagger is statistical using Unigram, Bigram, Trigram and HMM Methods. It presents a clear idea about all the algorithms with suitable examples. It also introduces a tag set for Marathi which can be used for tagging Marathi text. In this paper we have shown the development of the tagger as well as compared to check the accuracy of taggers output. The three Marathi POS taggers viz. Unigram, Bigram, Trigram and HMM gives the accuracy of 77.38%, 90.30%, 91.46% and 93.82% respectively.",
"title": ""
},
{
"docid": "4284e9bbe3bf4c50f9e37455f1118e6b",
"text": "A longevity revolution (Butler, 2008) is occurring across the globe. Because of factors ranging from the reduction of early-age mortality to an increase in life expectancy at later ages, most of the world’s population is now living longer than preceding generations (Bengtson, 2014). There are currently more than 44 million older adults—typically defined as persons 65 years and older—living in the United States, and this number is expected to increase to 98 million by 2060 (Administration on Aging, 2016). Although most older adults report higher levels of life satisfaction than do younger or middle-aged adults (George, 2010), between 5.6 and 8 million older Americans have a diagnosable mental health or substance use disorder (Bartels & Naslund, 2013). Furthermore, because of the rapid growth of the older adult population, this figure is expected to nearly double by 2030 (Bartels & Naslund, 2013). Mental health care is effective for older adults, and evidence-based treatments exist to address a broad range of issues, including anxiety disorders, depression, sleep disturbances, substance abuse, and some symptoms of dementia (Myers & Harper, 2004). Counseling interventions may also be beneficial for nonclinical life transitions, such as coping with loss, adjusting to retirement and a reduced income, and becoming a grandparent (Myers & Harper, 2004). Yet, older adults are underserved when it comes to mental",
"title": ""
},
{
"docid": "987f221f99b1638bb5bf0542dbc98c3f",
"text": "Pain, whether caused by physical injury or social rejection, is an inevitable part of life. These two types of pain-physical and social-may rely on some of the same behavioral and neural mechanisms that register pain-related affect. To the extent that these pain processes overlap, acetaminophen, a physical pain suppressant that acts through central (rather than peripheral) neural mechanisms, may also reduce behavioral and neural responses to social rejection. In two experiments, participants took acetaminophen or placebo daily for 3 weeks. Doses of acetaminophen reduced reports of social pain on a daily basis (Experiment 1). We used functional magnetic resonance imaging to measure participants' brain activity (Experiment 2), and found that acetaminophen reduced neural responses to social rejection in brain regions previously associated with distress caused by social pain and the affective component of physical pain (dorsal anterior cingulate cortex, anterior insula). Thus, acetaminophen reduces behavioral and neural responses associated with the pain of social rejection, demonstrating substantial overlap between social and physical pain.",
"title": ""
},
{
"docid": "70374e96446dcc65a0f5fa64e439a472",
"text": "Electric Vehicles (EVs) are projected as the most sustainable solutions for future transportation. EVs have many advantages over conventional hydrocarbon internal combustion engines including energy efficiency, environmental friendliness, noiselessness and less dependence on fossil fuels. However, there are also many challenges which are mainly related to the battery pack, such as battery cost, driving range, reliability, safety, battery capacity, cycle life, and recharge time. The performance of EVs is greatly dependent on the battery pack. Temperatures of the cells in a battery pack need to be maintained within its optimum operating temperature range in order to achieve maximum performance, safety and reliability under various operating conditions. Poor thermal management will affect the charging and discharging power, cycle life, cell balancing, capacity and fast charging capability of the battery pack. Hence, a thermal management system is needed in order to enhance the performance and to extend the life cycle of the battery pack. In this study, the effects of temperature on the Li-ion battery are investigated. Heat generated by LiFePO4 pouch cell was characterized using an EV accelerating rate calorimeter. Computational fluid dynamic analyses were carried out to investigate the performance of a liquid cooling system for a battery pack. The numerical simulations showed promising results and the design of the battery pack thermal management system was sufficient to ensure that the cells operated within their temperature limits.",
"title": ""
},
{
"docid": "f372bc2ed27f5d4c08087ddc46e5373e",
"text": "This work investigates the practice of credit scoring and introduces the use of the clustered support vector machine (CSVM) for credit scorecard development. This recently designed algorithm addresses some of the limitations noted in the literature that is associated with traditional nonlinear support vector machine (SVM) based methods for classification. Specifically, it is well known that as historical credit scoring datasets get large, these nonlinear approaches while highly accurate become computationally expensive. Accordingly, this study compares the CSVM with other nonlinear SVM based techniques and shows that the CSVM can achieve comparable levels of classification performance while remaining relatively cheap computationally. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "73a62915c29942d2fac0570cac7eb3e0",
"text": "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.",
"title": ""
},
{
"docid": "3f679dbd9047040d63da70fc9e977a99",
"text": "In this paper we consider videos (e.g. Hollywood movies) and their accompanying natural language descriptions in the form of narrative sentences (e.g. movie scripts without timestamps). We propose a method for temporally aligning the video frames with the sentences using both visual and textual information, which provides automatic timestamps for each narrative sentence. We compute the similarity between both types of information using vectorial descriptors and propose to cast this alignment task as a matching problem that we solve via dynamic programming. Our approach is simple to implement, highly efficient and does not require the presence of frequent dialogues, subtitles, and character face recognition. Experiments on various movies demonstrate that our method can successfully align the movie script sentences with the video frames of movies.",
"title": ""
},
{
"docid": "ce1c2217536fe62ea0f17167415b581c",
"text": "Generative Adversarial Networks (GANs) have shown great capacity on image generation, in which a discriminative model guides the training of a generative model to construct images that resemble real images. Recently, GANs have been extended from generating images to generating sequences (e.g., poems, music and codes). Existing GANs on sequence generation mainly focus on general sequences, which are grammar-free. In many real-world applications, however, we need to generate sequences in a formal language with the constraint of its corresponding grammar. For example, to test the performance of a database, one may want to generate a collection of SQL queries, which are not only similar to the queries of real users, but also follow the SQL syntax of the target database. Generating such sequences is highly challenging because both the generator and discriminator of GANs need to consider the structure of the sequences and the given grammar in the formal language. To address these issues, we study the problem of syntax-aware sequence generation with GANs, in which a collection of real sequences and a set of pre-defined grammatical rules are given to both discriminator and generator. We propose a novel GAN framework, namely TreeGAN, to incorporate a given Context-Free Grammar (CFG) into the sequence generation process. In TreeGAN, the generator employs a recurrent neural network (RNN) to construct a parse tree. Each generated parse tree can then be translated to a valid sequence of the given grammar. The discriminator uses a tree-structured RNN to distinguish the generated trees from real trees. We show that TreeGAN can generate sequences for any CFG and its generation fully conforms with the given syntax. Experiments on synthetic and real data sets demonstrated that TreeGAN significantly improves the quality of the sequence generation in context-free languages.",
"title": ""
},
{
"docid": "d771693809e966adc3656f58855fdda0",
"text": "A wide variety of crystalline nanowires (NWs) with outstanding mechanical properties have recently emerged. Measuring their mechanical properties and understanding their deformation mechanisms are of important relevance to many of their device applications. On the other hand, such crystalline NWs can provide an unprecedented platform for probing mechanics at the nanoscale. While challenging, the field of experimental mechanics of crystalline nanowires has emerged and seen exciting progress in the past decade. This review summarizes recent advances in this field, focusing on major experimental methods using atomic force microscope (AFM) and electron microscopes and key results on mechanics of crystalline nanowires learned from such experimental studies. Advances in several selected topics are discussed including elasticity, fracture, plasticity, and anelasticity. Finally, this review surveys some applications of crystalline nanowires such as flexible and stretchable electronics, nanocomposites, nanoelectromechanical systems (NEMS), energy harvesting and storage, and strain engineering, where mechanics plays a key role. [DOI: 10.1115/1.4035511]",
"title": ""
},
{
"docid": "368670b67f79d404d10b9226b860eeb5",
"text": "Parkinson disease (PD) is a complex neurodegenerative disorder with both motor and nonmotor symptoms owing to a spreading process of neuronal loss in the brain. At present, only symptomatic treatment exists and nothing can be done to halt the degenerative process, as its cause remains unclear. Risk factors such as aging, genetic susceptibility, and environmental factors all play a role in the onset of the pathogenic process but how these interlink to cause neuronal loss is not known. There have been major advances in the understanding of mechanisms that contribute to nigral dopaminergic cell death, including mitochondrial dysfunction, oxidative stress, altered protein handling, and inflammation. However, it is not known if the same processes are responsible for neuronal loss in nondopaminergic brain regions. Many of the known mechanisms of cell death are mirrored in toxin-based models of PD, but neuronal loss is rapid and not progressive and limited to dopaminergic cells, and drugs that protect against toxin-induced cell death have not translated into neuroprotective therapies in humans. Gene mutations identified in rare familial forms of PD encode proteins whose functions overlap widely with the known molecular pathways in sporadic disease and these have again expanded our knowledge of the neurodegenerative process but again have so far failed to yield effective models of sporadic disease when translated into animals. We seem to be missing some key parts of the jigsaw, the trigger event starting many years earlier in the disease process, and what we are looking at now is merely part of a downstream process that is the end stage of neuronal death.",
"title": ""
},
{
"docid": "c8f39a710ca3362a4d892879f371b318",
"text": "While sentiment and emotion analysis has received a considerable amount of research attention, the notion of understanding and detecting the intensity of emotions is relatively less explored. This paper describes a system developed for predicting emotion intensity in tweets. Given a Twitter message, CrystalFeel uses features derived from parts-of-speech, ngrams, word embedding, and multiple affective lexicons including Opinion Lexicon, SentiStrength, AFFIN, NRC Emotion & Hash Emotion, and our in-house developed EI Lexicons to predict the degree of the intensity associated with fear, anger, sadness, and joy in the tweet. We found that including the affective lexicons-based features allowed the system to obtain strong prediction performance, while revealing interesting emotion word-level and message-level associations. On gold test data, CrystalFeel obtained Pearson correlations of .717 on average emotion intensity and of .816 on sentiment intensity.",
"title": ""
},
{
"docid": "25c2bab5bd1d541629c23bb6a929f968",
"text": "A novel transition from coaxial cable to microstrip is presented in which the coax connector is perpendicular to the substrate of the printed circuit. Such a right-angle transition has practical advantages over more common end-launch geometries in some situations. The design is compact, easy to fabricate, and provides repeatable performance of better than 14 dB return loss and 0.4 dB insertion loss from DC to 40 GHz.",
"title": ""
},
{
"docid": "b88ceafe9998671820291773be77cabc",
"text": "The aim of this study was to propose a set of network methods to measure the specific properties of a team. These metrics were organised at macro-analysis levels. The interactions between teammates were collected and then processed following the analysis levels herein announced. Overall, 577 offensive plays were analysed from five matches. The network density showed an ambiguous relationship among the team, mainly during the 2nd half. The mean values of density for all matches were 0.48 in the 1st half, 0.32 in the 2nd half and 0.34 for the whole match. The heterogeneity coefficient for the overall matches rounded to 0.47 and it was also observed that this increased in all matches in the 2nd half. The centralisation values showed that there was no 'star topology'. The results suggest that each node (i.e., each player) had nearly the same connectivity, mainly in the 1st half. Nevertheless, the values increased in the 2nd half, showing a decreasing participation of all players at the same level. Briefly, these metrics showed that it is possible to identify how players connect with each other and the kind and strength of the connections between them. In summary, it may be concluded that network metrics can be a powerful tool to help coaches understand team's specific properties and support decision-making to improve the sports training process based on match analysis.",
"title": ""
},
{
"docid": "cc08e377d924f86fb6ceace022ad8db2",
"text": "Homomorphic cryptography has been one of the most interesting topics of mathematics and computer security since Gentry presented the first construction of a fully homomorphic encryption (FHE) scheme in 2009. Since then, a number of different schemes have been found, that follow the approach of bootstrapping a fully homomorphic scheme from a somewhat homomorphic foundation. All existing implementations of these systems clearly proved, that fully homomorphic encryption is not yet practical, due to significant performance limitations. However, there are many applications in the area of secure methods for cloud computing, distributed computing and delegation of computation in general, that can be implemented with homomorphic encryption schemes of limited depth. We discuss a simple algebraically homomorphic scheme over the integers that is based on the factorization of an approximate semiprime integer. We analyze the properties of the scheme and provide a couple of known protocols that can be implemented with it. We also provide a detailed discussion on searching with encrypted search terms and present implementations and performance figures for the solutions discussed in this paper.",
"title": ""
}
] |
scidocsrr
|
05a1475b53fac94da31c5be1309f4285
|
Machine Learning for Dialog State Tracking: A Review
|
[
{
"docid": "771b1e44b26f749f6ecd9fe515159d9c",
"text": "In spoken dialog systems, dialog state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. This task is challenging because of speech recognition and language understanding errors, yet good dialog state tracking is crucial to the performance of spoken dialog systems. This paper presents results from the third Dialog State Tracking Challenge, a research community challenge task based on a corpus of annotated logs of human-computer dialogs, with a blind test set evaluation. The main new feature of this challenge is that it studied the ability of trackers to generalize to new entities - i.e. new slots and values not present in the training data. This challenge received 28 entries from 7 research teams. About half the teams substantially exceeded the performance of a competitive rule-based baseline, illustrating not only the merits of statistical methods for dialog state tracking but also the difficulty of the problem.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "3d22f5be70237ae0ee1a0a1b52330bfa",
"text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.",
"title": ""
}
] |
[
{
"docid": "56525ce9536c3c8ea03ab6852b854e95",
"text": "The Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet where packets from large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet Architecture, Named Data Networking (NDN), the architecture itself has prevention measures to reduce the overload to the servers. This on the other hand increases the work and security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN which is known as Interest flooding attack, mitigate the consequence of it and provide service to the legitimate users. We have developed a game model for the DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward or redirect or drop the incoming interest packets thereby reducing the PIT table consumption. This helps in identifying the nodes that send malicious interest packets and eradicate their actions of sending malicious interests further. The main highlight of this work is that we have implemented the Game Theory model in the NDN architecture. It was primarily imposed for the IP internet architecture.",
"title": ""
},
{
"docid": "2545ce26d8727fc9d0e855ebff1a7171",
"text": "The quality and speed of most texture synthesis algorithms depend on a 2D input sample that is small and contains enough texture variations. However, little research exists on how to acquire such sample. For homogeneous patterns this can be achieved via manual cropping, but no adequate solution exists for inhomogeneous or globally varying textures, i.e. patterns that are local but not stationary, such as rusting over an iron statue with appearance conditioned on varying moisture levels.\n We present inverse texture synthesis to address this issue. Our inverse synthesis runs in the opposite direction with respect to traditional forward synthesis: given a large globally varying texture, our algorithm automatically produces a small texture compaction that best summarizes the original. This small compaction can be used to reconstruct the original texture or to re-synthesize new textures under user-supplied controls. More important, our technique allows real-time synthesis of globally varying textures on a GPU, where the texture memory is usually too small for large textures. We propose an optimization framework for inverse texture synthesis, ensuring that each input region is properly encoded in the output compaction. Our optimization process also automatically computes orientation fields for anisotropic textures containing both low- and high-frequency regions, a situation difficult to handle via existing techniques.",
"title": ""
},
{
"docid": "30059bf751594a9b913057cabb69ca00",
"text": "This paper proposes a new algorithm for automatic crack detection from 2D pavement images. It strongly relies on the localization of minimal paths within each image, a path being a series of neighboring pixels and its score being the sum of their intensities. The originality of the approach stems from the proposed way to select a set of minimal paths and the two postprocessing steps introduced to improve the quality of the detection. Such an approach is a natural way to take account of both the photometric and geometric characteristics of pavement images. An intensive validation is performed on both synthetic and real images (from five different acquisition systems), with comparisons to five existing methods. The proposed algorithm provides very robust and precise results in a wide range of situations, in a fully unsupervised manner, which is beyond the current state of the art.",
"title": ""
},
{
"docid": "0fa223f3e555cbea206640de7f699cf8",
"text": "Transforming unstructured text into structured form is important for fashion e-commerce platforms that ingest tens of thousands of fashion products every day. While most of the e-commerce product extraction research focuses on extracting a single product from the product title using known keywords, little attention has been paid to discovering potentially multiple products present in the listing along with their respective relevant attributes, and leveraging the entire title and description text for this purpose. We fill this gap and propose a novel composition of sequence labeling and multi-task learning as an end-to-end trainable deep neural architecture. We systematically evaluate our approach on one of the largest tagged datasets in fashion e-commerce consisting of 25K listings labeled at word-level. Given 23 labels, we discover label-values with F1 score of 92.2%. When applied to 2M listings, we discovered 2.6M fashion items and 9.5M attribute values.",
"title": ""
},
{
"docid": "999c7d8d16817d4b991e5b794be3b074",
"text": "Smile detection from facial images is a specialized task in facial expression analysis with many potential applications such as smiling payment, patient monitoring and photo selection. The current methods on this study are to represent face with low-level features, followed by a strong classifier. However, these manual features cannot well discover information implied in facial images for smile detection. In this paper, we propose to extract high-level features by a well-designed deep convolutional networks (CNN). A key contribution of this work is that we use both recognition and verification signals as supervision to learn expression features, which is helpful to reduce same-expression variations and enlarge different-expression differences. Our method is end-to-end, without complex pre-processing often used in traditional methods. High-level features are taken from the last hidden layer neuron activations of deep CNN, and fed into a soft-max classifier to estimate. Experimental results show that our proposed method is very effective, which outperforms the state-of-the-art methods. On the GENKI smile detection dataset, our method reduces the error rate by 21% compared with the previous best method.",
"title": ""
},
{
"docid": "51b201422fdf2a9666070abadc6849cf",
"text": "Losing a parent prior to age 18 years can have life-long implications. The challenges of emerging adulthood may be even more difficult for parentally bereaved college students, and studying their coping responses is crucial for designing campus services and therapy interventions. This study examined the relationships between bereavement-related distress, experiential avoidance (EA), values, and resilience. Findings indicated that EA and low importance of values were correlated with bereavement difficulties, with EA accounting for 26% of the variance in the bereavement distress measure. In addition, reports of behaving consistently with values accounted for 20% of the variance in the resiliency measure. Contrary to hypotheses and previous literature, there were no significant relationships between the measures of EA and values. The results, limitations, and directions for future research are discussed.",
"title": ""
},
{
"docid": "1aa3d2456e34c8ab59a340fd32825703",
"text": "It is well known that guided soft tissue healing with a provisional restoration is essential to obtain optimal anterior esthetics in the implant prosthesis. What is not well known is how to transfer a record of beautiful anatomically healed tissue to the laboratory. With the advent of emergence profile healing abutments and corresponding impression copings, there has been a dramatic improvement over the original 4.0-mm diameter design. This is a great improvement, however, it still does not accurately transfer a record of anatomically healed tissue, which is often triangularly shaped, to the laboratory, because the impression coping is a round cylinder. This article explains how to fabricate a \"custom impression coping\" that is an exact record of anatomically healed tissue for accurate duplication. This technique is significant because it allows an even closer replication of the natural dentition.",
"title": ""
},
{
"docid": "f60f75d03c06842efcb2454536ec8226",
"text": "The Internet of Things (IoT) relies on physical objects interconnected between each others, creating a mesh of devices producing information. In this context, sensors are surrounding our environment (e.g., cars, buildings, smartphones) and continuously collect data about our living environment. Thus, the IoT is a prototypical example of Big Data. The contribution of this paper is to define a software architecture supporting the collection of sensor-based data in the context of the IoT. The architecture goes from the physical dimension of sensors to the storage of data in a cloud-based system. It supports Big Data research effort as its instantiation supports a user while collecting data from the IoT for experimental or production purposes. The results are instantiated and validated on a project named SMARTCAMPUS, which aims to equip the SophiaTech campus with sensors to build innovative applications that supports end-users.",
"title": ""
},
{
"docid": "8c4e02333f466c074ad332d904f655b9",
"text": "Context. The global communication system is in a tremendous growth, leading to wide range of data generation. The Telecom operators in various Telecom Industries, that generate large amount of data has a need to manage these data efficiently. As the technology involved in the database management systems is increasing, there is a remarkable growth of NoSQL databases in the 20 century. Apache Cassandra is an advanced NoSQL database system, which is popular for handling semi-structured and unstructured format of Big Data. Cassandra has an effective way of compressing data by using different compaction strategies. This research is focused on analyzing the performances of different compaction strategies in different use cases for default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra, for a write heavy workload. Objectives. In this study, we investigate the appropriate performance metrics to evaluate the performance of compaction strategies. We provide the detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write heavy (90/10) work load, using default cassandra stress tool. Methods. A detailed literature research has been conducted to study the NoSQL databases, and the working of different compaction strategies in Apache Cassandra. The performances metrics are considered by the understanding of the literature research conducted, and considering the opinions of supervisors and Ericsson’s Apache Cassandra team. Two different tools were developed for collecting the performances of the considered metrics. The first tool was developed using Jython scripting language to collect the cassandra metrics, and the second tool was developed using python scripting language to collect the Operating System metrics. The graphs have been generated in Microsoft Excel, using the values obtained from the scripts. Results. Date Tiered Compaction Strategy and Size Tiered Compaction strategy showed more or less similar behaviour during the stress tests conducted. Level Tiered Compaction strategy has showed some remarkable results that effected the system performance, as compared to date tiered compaction and size tiered compaction strategies. Date tiered compaction strategy does not perform well for default cassandra stress model. Size tiered compaction can be preferred for default cassandra stress model, but not considerable for big data. Conclusions. With a detailed analysis and logical comparison of metrics, we finally conclude that Level Tiered Compaction Strategy performs better for a write heavy (90/10) workload while using default cassandra stress model, as compared to size tiered compaction and date tiered compaction strategies.",
"title": ""
},
{
"docid": "12b8dac3e97181eb8ca9c0406f2fa456",
"text": "INTRODUCTION\nThis paper discusses some of the issues and challenges of implementing appropriate and coordinated District Health Management Information System (DHMIS) in environments dependent on external support especially when insufficient attention has been given to the sustainability of systems. It also discusses fundamental issues which affect the usability of DHMIS to support District Health System (DHS), including meeting user needs and user education in the use of information for management; and the need for integration of data from all health-providing and related organizations in the district.\n\n\nMETHODS\nThis descriptive cross-sectional study was carried out in three DHSs in Kenya. Data was collected through use of questionnaires, focus group discussions and review of relevant literature, reports and operational manuals of the studied DHMISs.\n\n\nRESULTS\nKey personnel at the DHS level were not involved in the development and implementation of the established systems. The DHMISs were fragmented to the extent that their information products were bypassing the very levels they were created to serve. None of the DHMISs was computerized. Key resources for DHMIS operation were inadequate. The adequacy of personnel was 47%, working space 40%, storage space 34%, stationery 20%, 73% of DHMIS staff were not trained, management support was 13%. Information produced was 30% accurate, 19% complete, 26% timely, 72% relevant; the level of confidentiality and use of information at the point of collection stood at 32% and 22% respectively and information security at 48%. Basic DHMIS equipment for information processing was not available. This inhibited effective and efficient provision of information services.\n\n\nCONCLUSIONS\nAn effective DHMIS is essential for DHS planning, implementation, monitoring and evaluation activities. Without accurate, timely, relevant and complete information the existing information systems are not capable of facilitating the DHS managers in their day-today operational management. The existing DHMISs were found not supportive of the DHS managers' strategic and operational management functions. Consequently DHMISs were found to be plagued by numerous designs, operational, resources and managerial problems. There is an urgent need to explore the possibilities of computerizing the existing manual systems to take advantage of the potential uses of microcomputers for DHMIS operations within the DHS. Information system designers must also address issues of cooperative partnership in information activities, systems compatibility and sustainability.",
"title": ""
},
{
"docid": "37af8daa32affcdedb0b4820651a0b62",
"text": "Bag of words (BoW) model, which was originally used for document processing field, has been introduced to computer vision field recently and used in object recognition successfully. However, in face recognition, the order less collection of local patches in BoW model cannot provide strong distinctive information since the objects (face images) belong to the same category. A new framework for extracting facial features based on BoW model is proposed in this paper, which can maintain holistic spatial information. Experimental results show that the improved method can obtain better face recognition performance on face images of AR database with extreme expressions, variant illuminations, and partial occlusions.",
"title": ""
},
{
"docid": "20c57c17bd2db03d017b0f3fa8e2eb23",
"text": "Recent research shows that the i-vector framework for speaker recognition can significantly benefit from phonetic information. A common approach is to use a deep neural network (DNN) trained for automatic speech recognition to generate a universal background model (UBM). Studies in this area have been done in relatively clean conditions. However, strong background noise is known to severely reduce speaker recognition performance. This study investigates a phonetically-aware i-vector system in noisy conditions. We propose a front-end to tackle the noise problem by performing speech separation and examine its performance for both verification and identification tasks. The proposed separation system trains a DNN to estimate the ideal ratio mask of the noisy speech. The separated speech is then used to extract enhanced features for the i-vector framework. We compare the proposed system against a multi-condition trained baseline and a traditional GMM-UBM i-vector system. Our proposed system provides an absolute average improvement of 8% in identification accuracy and 1.2% in equal error rate.",
"title": ""
},
{
"docid": "53595cdb8e7a9e8ee2debf4e0dda6d45",
"text": "Botnets have become one of the major attacks in the internet today due to their illicit profitable financial gain. Meanwhile, honeypots have been successfully deployed in many computer security defence systems. Since honeypots set up by security defenders can attract botnet compromises and become spies in exposing botnet membership and botnet attacker behaviours, they are widely used by security defenders in botnet defence. Therefore, attackers constructing and maintaining botnets will be forced to find ways to avoid honeypot traps. In this paper, we present a hardware and software independent honeypot detection methodology based on the following assumption: security professionals deploying honeypots have a liability constraint such that they cannot allow their honeypots to participate in real attacks that could cause damage to others, while attackers do not need to follow this constraint. Attackers could detect honeypots in their botnets by checking whether compromised machines in a botnet can successfully send out unmodified malicious traffic. Based on this basic detection principle, we present honeypot detection techniques to be used in both centralised botnets and Peer-to-Peer (P2P) structured botnets. Experiments show that current standard honeypots and honeynet programs are vulnerable to the proposed honeypot detection techniques. At the end, we discuss some guidelines for defending against general honeypot-aware attacks.",
"title": ""
},
{
"docid": "e81a1fd47bd1ec7f4ffbd646f9873836",
"text": "Due to the increasing complexity of the processor architecture and the time-consuming software simulation, efficient design space exploration (DSE) has become a critical challenge in processor design. To address this challenge, recently machine learning techniques have been widely explored for predicting the performance of various configurations through conducting only a small number of simulations as the training samples. However, most existing methods randomly select some samples for simulation from the entire configuration space as training samples to build program-specific predictors. When a new program is considered, a large number of new program-specific simulations are needed for building a new predictor. Thus considerable simulation cost is required for each program. In this paper, we propose an efficient cross-program DSE framework TrEE by combining a flexible statistical sampling strategy and ensemble transfer learning technique. Specifically, TrEE includes the following two phases which also form our major contributions: 1) proposing an orthogonal array based foldover design for flexibly sampling the representative configurations for simulation, and 2) proposing an ensemble transfer learning algorithm that can effectively transfer knowledge among different types of programs for improving the prediction performance for the new program. We evaluate the proposed TrEE on the benchmarks of SPEC CPU 2006 suite. The results demonstrate that TrEE is much more efficient and robust than state-of-art DSE techniques.",
"title": ""
},
{
"docid": "205ed1eba187918ac6b4a98da863a6f2",
"text": "Since the first papers on asymptotic waveform evaluation (AWE), Pade-based reduced order models have become standard for improving coupled circuit-interconnect simulation efficiency. Such models can be accurately computed using bi-orthogonalization algorithms like Pade via Lanczos (PVL), but the resulting Pade approximates can still be unstable even when generated from stable RLC circuits. For certain classes of RC circuits it has been shown that congruence transforms, like the Arnoldi algorithm, can generate guaranteed stable and passive reduced-order models. In this paper we present a computationally efficient model-order reduction technique, the coordinate-transformed Arnoldi algorithm, and show that this method generates arbitrarily accurate and guaranteed stable reduced-order models for RLC circuits. Examples are presented which demonstrates the enhanced stability and efficiency of the new method.",
"title": ""
},
{
"docid": "49568236b0e221053c32b73b896d3dde",
"text": "The continuous growth in the size and use of the Internet is creating difficulties in the search for information. A sophisticated method to organize the layout of the information and assist user navigation is therefore particularly important. In this paper, we evaluate the feasibility of using a self-organizing map (SOM) to mine web log data and provide a visual tool to assist user navigation. We have developed LOGSOM, a system that utilizes Kohonen’s self-organizing map to organize web pages into a two-dimensional map. The organization of the web pages is based solely on the users’ navigation behavior, rather than the content of the web pages. The resulting map not only provides a meaningful navigation tool (for web users) that is easily incorporated with web browsers, but also serves as a visual analysis tool for webmasters to better understand the characteristics and navigation behaviors of web users visiting their pages. D 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "11ae42bedc18dedd0c29004000a4ec00",
"text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.",
"title": ""
},
{
"docid": "a903f9eb225a79ebe963d1905af6d3c8",
"text": "We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a \"bag,\" in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores.\n Since PBFS employs a nonconstant-time \"reducer\" -- \"hyperobject\" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P << (V+E)/Dlg3(V/D).",
"title": ""
},
{
"docid": "5f8a8117ff153528518713d66c876228",
"text": "Certain human talents, such as musical ability, have been associated with left-right differences in brain structure and function. In vivo magnetic resonance morphometry of the brain in musicians was used to measure the anatomical asymmetry of the planum temporale, a brain area containing auditory association cortex and previously shown to be a marker of structural and functional asymmetry. Musicians with perfect pitch revealed stronger leftward planum temporale asymmetry than nonmusicians or musicians without perfect pitch. The results indicate that outstanding musical ability is associated with increased leftward asymmetry of cortex subserving music-related functions.",
"title": ""
},
{
"docid": "8f876cfb665a4a6a0fc08c8d28584a14",
"text": "Personalisation is an important area in the field of IR that attempts to adapt ranking algorithms so that the results returned are tuned towards the searcher's interests. In this work we use query logs to build personalised ranking models in which user profiles are constructed based on the representation of clicked documents over a topic space. Instead of employing a human-generated ontology, we use novel latent topic models to determine these topics. Our experiments show that by subtly introducing user profiles as part of the ranking algorithm, rather than by re-ranking an existing list, we can provide personalised ranked lists of documents which improve significantly over a non-personalised baseline. Further examination shows that the performance of the personalised system is particularly good in cases where prior knowledge of the search query is limited.",
"title": ""
}
] |
scidocsrr
|
6cb53c711ed03317425017894be7ea47
|
Industrie 4.0: Enabling technologies
|
[
{
"docid": "292fb39474de4ecaac282229fe9f050e",
"text": "The widespread proliferation of handheld devices enables mobile carriers to be connected at anytime and anywhere. Meanwhile, the mobility patterns of mobile devices strongly depend on the users' movements, which are closely related to their social relationships and behaviors. Consequently, today's mobile networks are becoming increasingly human centric. This leads to the emergence of a new field which we call socially aware networking (SAN). One of the major features of SAN is that social awareness becomes indispensable information for the design of networking solutions. This emerging paradigm is applicable to various types of networks (e.g., opportunistic networks, mobile social networks, delay-tolerant networks, ad hoc networks, etc.) where the users have social relationships and interactions. By exploiting social properties of nodes, SAN can provide better networking support to innovative applications and services. In addition, it facilitates the convergence of human society and cyber-physical systems. In this paper, for the first time, to the best of our knowledge, we present a survey of this emerging field. Basic concepts of SAN are introduced. We intend to generalize the widely used social properties in this regard. The state-of-the-art research on SAN is reviewed with focus on three aspects: routing and forwarding, incentive mechanisms, and data dissemination. Some important open issues with respect to mobile social sensing and learning, privacy, node selfishness, and scalability are discussed.",
"title": ""
}
] |
[
{
"docid": "7d1348ad0dbd8f33373e556009d4f83a",
"text": "Laryngeal neoplasms represent 2% of all human cancers. They befall mainly the male sex, especially between 50 and 70 years of age, but exceptionally may occur in infancy or extreme old age. Their occurrence has increased considerably inclusively due to progressive population again. The present work aims at establishing a relation between this infirmity and its prognosis in patients submitted to the treatment recommended by Departament of Otolaryngology and Head Neck Surgery of the School of Medicine of São José do Rio Preto. To this effect, by means of karyometric optical microscopy, cell nuclei in the glottic region of 20 individuals, divided into groups according to their tumor stage and time of survival, were evaluated. Following comparation with a control group and statistical analsis, it became possible to verify that the lesser diameter of nuclei is of prognostic value for initial tumors in this region.",
"title": ""
},
{
"docid": "c61c5831c282c4db3308345aace744d7",
"text": "The present study examined the associations among participant demographics, personality factors, love dimensions, and relationship length. In total, 16,030 participants completed an internet survey assessing Big Five personality factors, Sternberg's three love dimensions (intimacy, passion, and commitment), and the length of time that they had been involved in a relationship. Results of structural equation modeling (SEM) showed that participant age was negatively associated with passion and positively associated with intimacy and commitment. In addition, the Big Five factor of Agreeableness was positively associated with all three love dimensions, whereas Conscientiousness was positively associated with intimacy and commitment. Finally, passion was negatively associated with relationship length, whereas commitment was positively correlated with relationship length. SEM results further showed that there were minor differences in these associations for women and men. Given the large sample size, our results reflect stable associations between personality factors and love dimensions. The present results may have important implications for relationship and marital counseling. Limitations of this study and further implications are discussed.",
"title": ""
},
{
"docid": "121fc3a009e8ce2938f822ba437bdaa3",
"text": "Due to an increased awareness and significant environmental pressures from various stakeholders, companies have begun to realize the significance of incorporating green practices into their daily activities. This paper proposes a framework using Fuzzy TOPSIS to select green suppliers for a Brazilian electronics company; our framework is built on the criteria of green supply chain management (GSCM) practices. An empirical analysis is made, and the data are collected from a set of 12 available suppliers. We use a fuzzy TOPSIS approach to rank the suppliers, and the results of the proposed framework are compared with the ranks obtained by both the geometric mean and the graded mean methods of fuzzy TOPSIS methodology. Then a Spearman rank correlation coefficient is used to find the statistical difference between the ranks obtained by the three methods. Finally, a sensitivity analysis has been performed to examine the influence of the preferences given by the decision makers for the chosen GSCM practices on the selection of green suppliers. Results indicate that the four dominant criteria are Commitment of senior management to GSCM; Product designs that reduce, reuse, recycle, or reclaim materials, components, or energy; Compliance with legal environmental requirements and auditing programs; and Product designs that avoid or reduce toxic or hazardous material use. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b11df93138d95bdfb3d50b013d4ecccc",
"text": "LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising of ontology and semantic rules aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several modules for ontology are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Then, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.",
"title": ""
},
{
"docid": "8142bb9e734574f251fa548a817f7f52",
"text": "The chain of delay elements creating delay lines are the basic building blocks of delay locked loops (DLLs) applied in clock distribution network in many VLSI circuits and systems. In the paper Current Controlled delay line (CCDL) elements with Duty Cycle Correction (DCC) has been described and investigated. The architecture of these elements is based on Switched-Current Mirror Inverter (SCMI) and CMOS standard or Schmitt type inverters. The primary characteristics of the described CCDL element have been compared with characteristics of two most popular ones: current starved, and shunt capacitor delay elements. The simulation results with real foundry parameters models in 180 nm, 1.8 V CMOS technology from UMC are also included. Simulations have been done using BSIM3V3 device models for Spectre from Cadence Design Systems.",
"title": ""
},
{
"docid": "11afe3e3e94ca2ec411f38bf1b0b2e82",
"text": "The requirements engineering program at Siemens Corporate Research has been involved with process improvement, training and project execution across many of the Siemens operating companies. We have been able to observe and assist with process improvement in mainly global software development efforts. Other researchers have reported extensively on various aspects of distributed requirements engineering, but issues specific to organizational structure have not been well categorized. Our experience has been that organizational and other management issues can overshadow technical problems caused by globalization. This paper describes some of the different organizational structures we have encountered, the problems introduced into requirements engineering processes by these structures, and techniques that were effective in mitigating some of the negative effects of global software development.",
"title": ""
},
{
"docid": "d7d1da1632553a0ac5c0961c8cf9b5ac",
"text": "In this paper a monitoring system for production well based on WSN is designed, where the sensors can be used as the downhole permanent sensor to measure temperature and pressure analog signals. The analog signals are modulated digital signals by data acquisition system. The digital signals are transmitted to database server of monitoring center. Meanwhile the data can be browsed on internet or by mobile telephone, and the consumer receive alarm message when the data are overflow. The system offered manager and technician credible gist to make decision timely.",
"title": ""
},
{
"docid": "c8ef46debc31d9d7013169cdf1403542",
"text": "BACKGROUND\nThis paper reports the results of a pilot randomized controlled trial comparing the delivery modality (mobile phone/tablet or fixed computer) of a cognitive behavioural therapy intervention for the treatment of depression. The aim was to establish whether a previously validated computerized program (The Sadness Program) remained efficacious when delivered via a mobile application.\n\n\nMETHOD\n35 participants were recruited with Major Depression (80% female) and randomly allocated to access the program using a mobile app (on either a mobile phone or iPad) or a computer. Participants completed 6 lessons, weekly homework assignments, and received weekly email contact from a clinical psychologist or psychiatrist until completion of lesson 2. After lesson 2 email contact was only provided in response to participant request, or in response to a deterioration in psychological distress scores. The primary outcome measure was the Patient Health Questionnaire 9 (PHQ-9). Of the 35 participants recruited, 68.6% completed 6 lessons and 65.7% completed the 3-months follow up. Attrition was handled using mixed-model repeated-measures ANOVA.\n\n\nRESULTS\nBoth the Mobile and Computer Groups were associated with statistically significantly benefits in the PHQ-9 at post-test. At 3 months follow up, the reduction seen for both groups remained significant.\n\n\nCONCLUSIONS\nThese results provide evidence to indicate that delivering a CBT program using a mobile application, can result in clinically significant improvements in outcomes for patients with depression.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry ACTRN 12611001257954.",
"title": ""
},
{
"docid": "df044b996752beb7f0fd067d17c91199",
"text": "We introduce lemonUby, a new lexical resource integrated in the Semantic Web which is the result of converting data extracted from the existing large-scale linked lexical resource UBY to the lemon lexicon model. The following data from UBY were converted: WordNet, FrameNet, VerbNet, English and German Wiktionary, the English and German entries of OmegaWiki, as well as links between pairs of these lexicons at the word sense level (links between VerbNet and FrameNet, VerbNet and WordNet, WordNet and FrameNet, WordNet and Wiktionary, WordNet and German OmegaWiki). We linked lemonUby to other lexical resources and linguistic terminology repositories in the Linguistic Linked Open Data cloud and outline possible applications of this new dataset.",
"title": ""
},
{
"docid": "9911063e58b5c2406afd761d8826538a",
"text": "BACKGROUND\nThe purpose of our study was to evaluate inter-observer reliability of the Three-Column classifications with conventional Schatzker and AO/OTA of Tibial Plateau Fractures.\n\n\nMETHODS\n50 cases involving all kinds of the fracture patterns were collected from 278 consecutive patients with tibial plateau fractures who were internal fixed in department of Orthopedics and Trauma III in Shanghai Sixth People's Hospital. The series were arranged randomly, numbered 1 to 50. Four observers were chosen to classify these cases. Before the research, a classification training session was held to each observer. They were given as much time as they required evaluating the radiographs accurately and independently. The classification choices made at the first viewing were not available during the second viewing. The observers were not provided with any feedback after the first viewing. The kappa statistic was used to analyze the inter-observer reliability of the three fracture classification made by the four observers.\n\n\nRESULTS\nThe mean kappa values for inter-observer reliability regarding Schatzker classification was 0.567 (range: 0.513-0.589), representing \"moderate agreement\". The mean kappa values for inter-observer reliability regarding AO/ASIF classification systems was 0.623 (range: 0.510-0.710) representing \"substantial agreement\". The mean kappa values for inter-observer reliability regarding Three-Column classification systems was 0.766 (range: 0.706-0.890), representing \"substantial agreement\".\n\n\nCONCLUSION\nThree-Column classification, which is dependent on the understanding of the fractures using CT scans as well as the 3D reconstruction can identity the posterior column fracture or fragment. It showed \"substantial agreement\" in the assessment of inter-observer reliability, higher than the conventional Schatzker and AO/OTA classifications. We finally conclude that Three-Column classification provides a higher agreement among different surgeons and could be popularized and widely practiced in other clinical centers.",
"title": ""
},
{
"docid": "474572cef9f1beb875d3ae012e06160f",
"text": "Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio.",
"title": ""
},
{
"docid": "bfc85b95287e4abc2308849294384d1e",
"text": "& 10 0 YE A RS A G O 50 YEARS AGO A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge. He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.",
"title": ""
},
{
"docid": "4c711149abc3af05a8e55e52eefddd97",
"text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies relating half toning arc easily identifiable in the frequency domain. This paper proposes a method for de screening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to half toning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows arc filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.",
"title": ""
},
{
"docid": "29479201c12e99eb9802dd05cff60c36",
"text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.",
"title": ""
},
{
"docid": "24297f719741f6691e5121f33bafcc09",
"text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells. This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.",
"title": ""
},
{
"docid": "f4b5a2584833466fa26da00b07a7f261",
"text": "This paper describes the development of the technology threat avoidance theory (TTAT), which explains individual IT users’ behavior of avoiding the threat of malicious information technologies. We articulate that avoidance and adoption are two qualitatively different phenomena and contend that technology acceptance theories provide a valuable, but incomplete, understanding of users’ IT threat avoidance behavior. Drawing from cybernetic theory and coping theory, TTAT delineates the avoidance behavior as a dynamic positive feedback loop in which users go through two cognitive processes, threat appraisal and coping appraisal, to decide how to cope with IT threats. In the threat appraisal, users will perceive an IT threat if they believe that they are susceptible Alan Dennis was the accepting senior editor for this paper. to malicious IT and that the negative consequences are severe. The threat perception leads to coping appraisal, in which users assess the degree to which the IT threat can be avoided by taking safeguarding measures based on perceived effectiveness and costs of the safeguarding measure and selfefficacy of taking the safeguarding measure. TTAT posits that users are motivated to avoid malicious IT when they perceive a threat and believe that the threat is avoidable by taking safeguarding measures; if users believe that the threat cannot be fully avoided by taking safeguarding measures, they would engage in emotion-focused coping. Integrating process theory and variance theory, TTAT enhances our understanding of human behavior under IT threats and makes an important contribution to IT security research and practice.",
"title": ""
},
{
"docid": "fa8d8eda07b7045f69325670ba6aff27",
"text": "A three-axis tactile force sensor that determines the touch and slip/friction force may advance artificial skin and robotic applications by fully imitating human skin. The ability to detect slip/friction and tactile forces simultaneously allows unknown objects to be held in robotic applications. However, the functionalities of flexible devices have been limited to a tactile force in one direction due to difficulties fabricating devices on flexible substrates. Here we demonstrate a fully printed fingerprint-like three-axis tactile force and temperature sensor for artificial skin applications. To achieve economic macroscale devices, these sensors are fabricated and integrated using only printing methods. Strain engineering enables the strain distribution to be detected upon applying a slip/friction force. By reading the strain difference at four integrated force sensors for a pixel, both the tactile and slip/friction forces can be analyzed simultaneously. As a proof of concept, the high sensitivity and selectivity for both force and temperature are demonstrated using a 3×3 array artificial skin that senses tactile, slip/friction, and temperature. Multifunctional sensing components for a flexible device are important advances for both practical applications and basic research in flexible electronics.",
"title": ""
},
{
"docid": "36a66d72b0cdffb4ef272c4f3da54ba2",
"text": "Asthma is a common disease that affects 300 million people worldwide. Given the large number of eosinophils in the airways of people with mild asthma, and verified by data from murine models, asthma was long considered the hallmark T helper type 2 (TH2) disease of the airways. It is now known that some asthmatic inflammation is neutrophilic, controlled by the TH17 subset of helper T cells, and that some eosinophilic inflammation is controlled by type 2 innate lymphoid cells (ILC2 cells) acting together with basophils. Here we discuss results from in-depth molecular studies of mouse models in light of the results from the first clinical trials targeting key cytokines in humans and describe the extraordinary heterogeneity of asthma.",
"title": ""
},
{
"docid": "7c13ebe2897fc4870a152159cda62025",
"text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.",
"title": ""
},
{
"docid": "b9c40aa4c8ac9d4b6cbfb2411c542998",
"text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.",
"title": ""
}
] |
scidocsrr
|
27d8974b8d27289f8ec8f1ff9cb8def5
|
Visualization and Pruning of SSD with the base network VGG16
|
[
{
"docid": "5de0fcb624f4c14b1a0fe43c60d7d4ad",
"text": "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
"title": ""
},
{
"docid": "28c03f6fb14ed3b7d023d0983cb1e12b",
"text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"title": ""
},
{
"docid": "d1f5fd87b019027297377c1e6f8fa578",
"text": "Large CNNs have delivered impressive performance in various computer vision applications. But the storage and computation requirements make it problematic for deploying these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank tensor decomposition for removing the redundancy in the convolution kernels. The algorithm finds the exact global optimizer of the decomposition and is more effective than iterative methods. Based on the decomposition, we further propose a new method for training low-rank constrained CNNs from scratch. Interestingly, while achieving a significant speedup, sometimes the lowrank constrained CNNs delivers significantly better performance than their nonconstrained counterparts. On the CIFAR-10 dataset, the proposed low-rank NIN model achieves 91.31% accuracy (without data augmentation), which also improves upon state-of-the-art result. We evaluated the proposed method on CIFAR10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet, NIN, VGG and GoogleNet with success. For example, the forward time of VGG16 is reduced by half while the performance is still comparable. Empirical success suggests that low-rank tensor decompositions can be a very useful tool for speeding up large CNNs.",
"title": ""
},
{
"docid": "3e58500462ad0efdab8ee660a1a02b77",
"text": "I We train CNNs with composite layers of oriented low-rank filters, of which the network learns the most effective linear combination I In effect our networks learn a basis space for filters, based on simpler low-rank filters I We propose an initialization for composite layers of heterogeneous filters, to train such networks from scratch I Our models are faster and use less parameters I With a small number of full filters, our models also generalize better",
"title": ""
}
] |
[
{
"docid": "5da2d6895eeee2edfc4f8c75f807b8e3",
"text": "Traditional Chinese Medicine (TCM) is a range of medical practices used in China for more than four millenniums, a treasure of Chinese people (Lukman, He, & Hui, 2007). The important role of TCM and its profound influence on the health care system in China is well recognized. The West also has drawn the attention towards various aspects of TCM in the past few years (Chan, 1995). TCM consists of a systematized methodology of medical treatment and diagnosis (Watsuji, Arita, Shinohara, & Kitade, 1999). According to the basic concept of TCM, the different body-parts, zang-viscera and fu-viscera, the meridians of the body are linked as an inseparable whole. The inner abnormality can present on outer parts, while the outer disease can turn into the inner parts (Bakshi & Pal, 2010). Therefore, some diseases can be diagnosed from the appearance of the outer body. As the significant component part of TCM theory, TCM diagnostics includes two parts: TCM Sizhen (the four diagnosis methods) and differentiation of syndromes. The TCM physician experience the gravity of health condition of a sick person by means of the four diagnosis methods depending on the doctor's body \"sensors\" such as fingers, eyes, noses etc. Generally, TCM Sizhen consists of the following four diagnostic processes: inspection, auscultation and olfaction, inquiry, and pulse feeling and palpation (Nenggan & Zhaohui, 2004). In the inspection diagnostic process, TCM practitioners observe abnormal changes in the patient's vitality, colour, appearance, secretions and excretions. The vital signs encompass eyes, tongue, facial expressions, general and body surface appearance. The inter-relationship between the external part of the body such as face and tongue and the internal organ(s) is used to assist TCM doctors to predict the pathological changes of internal organs. In the auscultation and olfaction process, the doctor listen the patient's voice, breathing, and coughing used to judge the pathological changes in the interior of the patient's body. Inquiry diagnosis method is refer to query patient's family history, feelings in various aspects, such as chills and fever, perspiration, appetite and thirst, as well as pain in terms of its nature and locality. Palpation approach involves pulse diagnosis (Siu Cheung, Yulan, & Doan Thi Cam, 2007). The palpation diagnosis has been accepted as one of the most powerful method to give information for making diagnosis from ancient time till now. The pulse waves are measured at six points near the wrists of both hands. The waves are different each other and give us information about different organs (Hsing-Lin, Suzuki, Adachi, & Umeno, 1993). Tongue diagnosis is another inspection diagnostic method which",
"title": ""
},
{
"docid": "02234d239a2150182bc149d285b6c5a4",
"text": "The development and standardization of semantic web technologies has resulted in an unprecedented volume of data being published on the Web as Linked Data (LD). However, we observe widely varying data quality ranging from extensively curated datasets to crowdsourced and extracted data of relatively low quality. In this article, we present the results of a systematic review of approaches for assessing the quality of LD. We gather existing approaches and analyze them qualitatively. In particular, we unify and formalize commonly used terminologies across papers related to data quality and provide a comprehensive list of 18 quality dimensions and 69 metrics. Additionally, we qualitatively analyze the 30 core approaches and 12 tools using a set of attributes. The aim of this article is to provide researchers and data curators a comprehensive understanding of existing work, thereby encouraging further experimentation and development of new approaches focused towards data quality, specifically for LD.",
"title": ""
},
{
"docid": "09e19f24675cb22638df3f82d07686ac",
"text": "This letter discusses a miniature low profile ultrawideband (UWB) spiral. The antenna is miniaturized using a combination of dielectric and inductive loading. In addition, a ferrite coated ground plane is adopted in place of the traditional metallic ground plane for profile reduction. Using full-wave simulations and measurements, it is shown that the miniaturized spiral can achieve similar performance to a traditional planar spiral twice its size.",
"title": ""
},
{
"docid": "4182770927ae68e5047906df446bafe9",
"text": "In this study, a square-shaped slot antenna is designed for the future fifth generation (5G) wireless applications. The antenna has a compact size of 0.64λg × 0.64λg at 38 GHz, which consists of ellipse shaped radiating patch fed by a 50 Q micro-strip line on the Rogers RT5880 substrates. A rectangle shaped slot is etched in the ground plane to enhance the antenna bandwidth. In order to obtain better impedance matching bandwidth of the antennas, some small circular radiating patches are added to the square-shaped slot. Simulations show that the measured impedance bandwidth of the proposed antenna ranges from 20 to 42 GHz for a reflection coefficient of Su less than −10dB which is cover 5G bands (28/38GHz). The proposed antenna provides almost omni-directional patterns, relatively flat gain, and high radiation efficiency through the frequency band.",
"title": ""
},
{
"docid": "62999806021ff2533ddf7f06117f7d1a",
"text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.",
"title": ""
},
{
"docid": "1b9d74a2f720a75eec5d94736668390e",
"text": "Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing a wealth of information for sensitive and specific diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a dataset of unprecedented size, consisting of 4,875 subjects with 93,500 pixelwise annotated images, which is by far the largest annotated CMR dataset. By combining FCN with a large-scale annotated dataset, we show for the first time that an automated method achieves a performance on par with human experts in analysing CMR images and deriving clinical measures. We anticipate this to be a starting point for automated and comprehensive CMR analysis with human-level performance, facilitated by machine learning. It is an important advance on the pathway towards computer-assisted CVD assessment. An estimated 17.7 million people died from cardiovascular diseases (CVDs) in 2015, representing 31% of all global deaths [1]. More people die annually from CVDs than any other cause. Technological advances in medical imaging have led to a number of options for non-invasive investigation of CVDs, including echocardiography, computed tomography (CT), cardiovascular magnetic resonance (CMR) etc., each having its own advantages and disadvantages. Due to its good image quality, excellent soft tissue contrast and absence of ionising radiation, CMR has established itself as the gold standard for assessing cardiac chamber volume and mass for a wide range of CVDs [2–4]. To derive quantitative measures such as volume and mass, clinicians have been relying on manual approaches to trace the cardiac chamber contours. It typically takes a trained",
"title": ""
},
{
"docid": "c6bfdc5c039de4e25bb5a72ec2350223",
"text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.",
"title": ""
},
{
"docid": "d4e652097c6e3b7c265adf4848471d19",
"text": "The usage of Unmanned Aerial Vehicles (UAVs) is increasing day by day. In recent years, UAVs are being used in increasing number of civil applications, such as policing, fire-fighting, etc in addition to military applications. Instead of using one large UAV, multiple UAVs are nowadays used for higher coverage area and accuracy. Therefore, networking models are required to allow two or more UAV nodes to communicate directly or via relay node(s). Flying Ad-Hoc Networks (FANETs) are formed which is basically an ad hoc network for UAVs. This is relatively a new technology in network family where requirements vary largely from traditional networking model, such as Mobile Ad-hoc Networks and Vehicular Ad-hoc Networks. In this paper, Flying Ad-Hoc Networks are surveyed along with its challenges compared to traditional ad hoc networks. The existing routing protocols for FANETs are then classified into six major categories which are critically analyzed and compared based on various performance criteria. Our comparative analysis will help network engineers in choosing appropriate routing protocols based on the specific scenario where the FANET will be deployed.",
"title": ""
},
{
"docid": "ae19bd4334434cfb8c5ac015dc8d3bd4",
"text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.",
"title": ""
},
{
"docid": "bc892fe2a369f701e0338085eaa0bdbd",
"text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.",
"title": ""
},
{
"docid": "a161b0fe0b38381a96f02694fd84c3bf",
"text": "We have been developing human mimetic musculoskeletal humanoids from the view point of human-inspired design approach. Kengoro is our latest version of musculoskeletal humanoid designed to achieve physically interactive actions in real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and humanlike proportions. We adopted the sensor-driver integrated muscle modules for improved muscle control. In order to demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.",
"title": ""
},
{
"docid": "7ddfa92cee856e2ef24caf3e88d92b93",
"text": "Applications are getting increasingly interconnected. Although the interconnectedness provide new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users’ behavior, preferences, and needs can be connected to. In this paper we present our works on a personality-based music recommender system in which we use users’ personality traits as a general model. We identified relationships between users’ personality and their behavior, preferences, and needs, and also investigated different ways to infer users’ personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and show how these models can be implemented in a music recommender system to positively contribute to the user experience.",
"title": ""
},
{
"docid": "fe043223b37f99419d9dc2c4d787cfbb",
"text": "We describe a Markov chain Monte Carlo based particle filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and/or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo (MCMC) sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint particle filter. Since a joint particle filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the particle filter with an MCMC sampling step. The resulting filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large scale experiment on a video-sequence of over 10,000 frames in length.",
"title": ""
},
{
"docid": "f584b2d89bacacf31158496460d6f546",
"text": "Significant advances in clinical practice as well as basic and translational science were presented at the American Transplant Congress this year. Topics included innovative clinical trials to recent advances in our basic understanding of the scientific underpinnings of transplant immunology. Key areas of interest included the following: clinical trials utilizing hepatitis C virus-positive (HCV+ ) donors for HCV- recipients, the impact of the new allocation policies, normothermic perfusion, novel treatments for desensitization, attempts at precision medicine, advances in xenotransplantation, the role of mitochondria and exosomes in rejection, nanomedicine, and the impact of the microbiota on transplant outcomes. This review highlights some of the most interesting and noteworthy presentations from the meeting.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "9c16bf2fb7ceba2bf872ca3d1475c6d9",
"text": "Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically zero-th (max) or the first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics.Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes that when combined with hand-crafted features (as is standard practice) achieves state-of-the-art accuracy.",
"title": ""
},
{
"docid": "75f27cbf706e0552400ce3ce4319ac84",
"text": "Strategy scholars have used the notion of the Business Model to refer to the ‘logic of the firm’ e how it operates and creates value for its stakeholders. On the surface, this notion appears to be similar to that of strategy. We present a conceptual framework to separate and relate the concepts of strategy and business model: a business model, we argue, is a reflection of the firm’s realized strategy. We find that in simple competitive situations there is a one-to-one mapping between strategy and business model, which makes it difficult to separate the two notions. We show that the concepts of strategy and business model differ when there are important contingencies on which a well-designed strategy must be based. Our framework also delivers a clear distinction between strategy and tactics, made possible because strategy and business model are different constructs.",
"title": ""
},
{
"docid": "ab05c141b9d334f488cfb08ad9ed2137",
"text": "Cellular communications are undergoing significant evolutions in order to accommodate the load generated by increasingly pervasive smart mobile devices. Dynamic access network adaptation to customers' demands is one of the most promising paths taken by network operators. To that end, one must be able to process large amount of mobile traffic data and outline the network utilization in an automated manner. In this paper, we propose a framework to analyze broad sets of Call Detail Records (CDRs) so as to define categories of mobile call profiles and classify network usages accordingly. We evaluate our framework on a CDR dataset including more than 300 million calls recorded in an urban area over 5 months. We show how our approach allows to classify similar network usage profiles and to tell apart normal and outlying call behaviors.",
"title": ""
},
{
"docid": "c27e6b7be1a5d00632bbbea64b2516ad",
"text": "Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered as a extension of the zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One of the limitation of the BD is that the sum rate does not grow linearly with the number of users and transmit antennas at low and medium signal-to-noise ratio regime, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. Also it performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of the BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach of the conventional BD. Applying this idea to the MMSE channel inversion for identifying orthonormal basis vectors of the precoder, and employing the MMSE criterion for finding its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.",
"title": ""
},
{
"docid": "8f8413bb28c3a2c23f28be301aacafb6",
"text": "In this paper we propose a new scheme based on adaptive critics for finding online the state feedback, infinite horizon, optimal control solution of linear continuous-time systems using only partial knowledge regarding the system dynamics. In other words, the algorithm solves online an algebraic Riccati equation without knowing the internal dynamics model of the system. Being based on a policy iteration technique, the algorithm alternates between the policy evaluation and policy update steps until an update of the control policy will no longer improve the system performance. The result is a direct adaptive control algorithm which converges to the optimal control solution without using an explicit, a priori obtained, model of the system internal dynamics. The effectiveness of the algorithm is shown while finding the optimal-load-frequency controller for a power system. © 2008 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
85ca87f997d732e5e7f8dbf39eeb0bb4
|
A BERT Baseline for the Natural Questions
|
[
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
}
] |
[
{
"docid": "f15cb62cb81b71b063d503eb9f44d7c5",
"text": "This study presents an improved krill herd (IKH) approach to solve global optimization problems. The main improvement pertains to the exchange of information between top krill during motion calculation process to generate better candidate solutions. Furthermore, the proposed IKH method uses a new Lévy flight distribution and elitism scheme to update the KH motion calculation. This novel meta-heuristic approach can accelerate the global convergence speed while preserving the robustness of the basic KH algorithm. Besides, the detailed implementation procedure for the IKH method is described. Several standard benchmark functions are used to verify the efficiency of IKH. Based on the results, the performance of IKH is superior to or highly competitive with the standard KH and other robust population-based optimization methods. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a5c0ad9c841245e57bb71b19b4ad24b1",
"text": "HTTP video streaming, such as Flash video, is widely deployed to deliver stored media. Owing to TCP's reliable service, the picture and sound quality would not be degraded by network impairments, such as high delay and packet loss. However, the network impairments can cause rebuffering events which would result in jerky playback and deform the video's temporal structure. These quality degradations could adversely affect users' quality of experience (QoE). In this paper, we investigate the relationship among three levels of quality of service (QoS) of HTTP video streaming: network QoS, application QoS, and user QoS (i.e., QoE). Our ultimate goal is to understand how the network QoS affects the QoE of HTTP video streaming. Our approach is to first characterize the correlation between the application and network QoS using analytical models and empirical evaluation. The second step is to perform subjective experiments to evaluate the relationship between application QoS and QoE. Our analysis reveals that the frequency of rebuffering is the main factor responsible for the variations in the QoE.",
"title": ""
},
{
"docid": "a0306096725c0d4b6bdd648bfa396f13",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "5bd61380b9b05b3e89d776c6cbeb0336",
"text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "317b77f643ad75034f5f2fe516e8aef8",
"text": "This paper addresses the problem of recognizing multiple rigid objects that are common to two images. We propose a generic algorithm that allows to simultaneously decide if one or several objects are common to the two images and to estimate the corresponding geometric transformations. The considered transformations include similarities, homographies and epipolar geometry. We first propose a generalization of an a contrario formulation of the RANSAC algorithm proposed in [6]. We then introduce an algorithm for the detection of multiple transformations between images and show its efficiency on various experiments.",
"title": ""
},
{
"docid": "1aa73c76f121f01a5ee1a7ced788841f",
"text": "In recent years Educational Data Mining (EDM) has emerged as a new field of research due to the development of several statistical approaches to explore data in educational context. One such application of EDM is early prediction of student results. This is necessary in higher education for identifying the "weak" students so that some form of remediation may be organized for them. In this paper a set of attributes are first defined for a group of students majoring in Computer Science in some undergraduate colleges in Kolkata. Since the numbers of attributes are reasonably high, feature selection algorithms are applied on the data set to reduce the number of features. Five classes of Machine Learning Algorithm (MLA) are then applied on this data set and it was found that the best results were obtained with the decision tree class of algorithms. It was also found that the prediction results obtained with this model are comparable with other previously developed models.",
"title": ""
},
{
"docid": "3e714d9c4b0bb0684d1810a809f5f53f",
"text": "For very small software development companies, the quality of their software products is a key to competitive advantage. However, the usage of Software Engineering standards is extremely low amongst such very small software companies. A primary reason cited by many such companies for this lack of quality standards adoption is the perception that they have been developed for large multi-national software companies and not with small and very small organizations in mind and are therefore not suitable for their specific needs. This paper describes an innovative systematic approach to the development of the software process lifecycle standard for very small entities ISO/IEC 29110, following the Rogers model of the Innovation-Development process. The ISO/IEC 29110 standard is unique amongst software and systems engineering standards, in that the working group mandated to develop a new standard approached industry to conduct a needs assessment and gather actual requirements for a new standard as part of the standards development process. This paper presents a unique insight from the perspective of some of the standards authors on the development of the ISO/IEC 29110 standard, including the rationale behind its development and the innovative design of implementation guides to assist very small companies in adopting the standards, as well outlining a pilot project scheme for usage in early trials of this standard.",
"title": ""
},
{
"docid": "21c4a6bb8fee4e403c6cd384e1e423be",
"text": "Fault detection prediction of FAB (wafer fabrication) process in semiconductor manufacturing process is possible that improve product quality and reliability in accordance with the classification performance. However, FAB process is sometimes due to a fault occurs. And mostly it occurs “pass”. Hence, data imbalance occurs in the pass/fail class. If the data imbalance occurs, prediction models are difficult to predict “fail” class because increases the bias of majority class (pass class). In this paper, we propose the SMOTE (Synthetic Minority Oversampling Technique) based over sampling method for solving problem of data imbalance. The proposed method solve the imbalance of the between pass and fail by oversampling the minority class of fail. In addition, by applying the fault detection prediction model to measure the performance.",
"title": ""
},
{
"docid": "374b87b187fbc253477cd1e8f60e9d91",
"text": "Term Used Definition Provided Source I/T strategy None provided Henderson and Venkatraman 1999 Information Management Strategy \" A long-term precept for directing, implementing and supervising information management \" (information management left undefined) Reponen 1994 (p. 30) \" Deals with management of the entire information systems function, \" referring to Earl (1989, p. 117): \" the management framework which guides how the organization should run IS/IT activities \" Ragu-Nathan et al. 2001 (p. 269)",
"title": ""
},
{
"docid": "da7d4ddac422712f36b97af1783d7eb7",
"text": "A single image captures the appearance and position of multiple entities in a scene as well as their complex interactions. As a consequence, natural language grounded in visual contexts tends to be diverse – with utterances differing as focus shifts to specific objects, interactions, or levels of detail. Recently, neural sequence models such as RNNs and LSTMs have been employed to produce visually-grounded language. Beam Search, the standard work-horse for decoding sequences from these models, is an approximate inference algorithm that decodes the top-B sequences in a greedy left-to-right fashion. In practice, the resulting sequences are often minor rewordings of a common utterance, failing to capture the multimodal nature of source images. To address this shortcoming, we propose Diverse Beam Search (DBS), a diversity promoting alternative to BS for approximate inference. DBS produces sequences that are significantly different from each other by incorporating diversity constraints within groups of candidate sequences during decoding; moreover, it achieves this with minimal computational or memory overhead. We demonstrate that our method improves both diversity and quality of decoded sequences over existing techniques on two visually-grounded language generation tasks – image captioning and visual question generation – particularly on complex scenes containing diverse visual content. We also show similar improvements at language-only machine translation tasks, highlighting the generality of our approach.",
"title": ""
},
{
"docid": "56d7a5e46456b4dd0e41a7af3e26a11c",
"text": "We develop an augmented reality-based app that resides on the attacker's smartphone and leverages computer vision and raw input data to provide real-time mimicry attack guidance on the victim's phone. Our approach does not require tampering or installing software on the victim's device, or specialized hardware. The app is demonstrated by attacking keystroke dynamics, a method leveraging the unique typing behaviour of users to authenticate them on a smartphone, which was previously thought to be hard to mimic. In addition, we propose a low-tech AR-like audiovisual method based on spatial pointers on a transparent film and audio cues. We conduct experiments with 31 participants and mount over 400 attacks to show that our methods enable attackers to successfully bypass keystroke dynamics for 87% of the attacks after an average mimicry training of four minutes. Our AR-based method can be extended to attack other input behaviour-based biometrics. While the particular attack we describe is relatively narrow, it is a good example of using AR guidance to enable successful mimicry of user behaviour---an approach of increasing concern as AR functionality becomes more commonplace.",
"title": ""
},
{
"docid": "e19743c3b2402090f9647f669a14d554",
"text": "To investigate the relation between vocal prosody and change in depression severity over time, 57 participants from a clinical trial for treatment of depression were evaluated at seven-week intervals using a semistructured clinical interview for depression severity (Hamilton Rating Scale for Depression (HRSD)). All participants met criteria for major depressive disorder (MDD) at week one. Using both perceptual judgments by naive listeners and quantitative analyses of vocal timing and fundamental frequency, three hypotheses were tested: 1) Naive listeners can perceive the severity of depression from vocal recordings of depressed participants and interviewers. 2) Quantitative features of vocal prosody in depressed participants reveal change in symptom severity over the course of depression. 3) Interpersonal effects occur as well; such that vocal prosody in interviewers shows corresponding effects. These hypotheses were strongly supported. Together, participants' and interviewers' vocal prosody accounted for about 60 percent of variation in depression scores, and detected ordinal range of depression severity (low, mild, and moderate-to-severe) in 69 percent of cases (kappa = 0.53). These findings suggest that analysis of vocal prosody could be a powerful tool to assist in depression screening and monitoring over the course of depressive disorder and recovery.",
"title": ""
},
{
"docid": "d43f56f13fee5b45cb31233e61aa20d0",
"text": "An automated brain tumor segmentation method was developed and validated against manual segmentation with three-dimensional magnetic resonance images in 20 patients with meningiomas and low-grade gliomas. The automated method (operator time, 5-10 minutes) allowed rapid identification of brain and tumor tissue with an accuracy and reproducibility comparable to those of manual segmentation (operator time, 3-5 hours), making automated segmentation practical for low-grade gliomas and meningiomas.",
"title": ""
},
{
"docid": "d3b0bc464f9681c8261a268a4a957e16",
"text": "The composition of amorphous oxide semiconductors, which are well known for their optical transparency, can be tailored to enhance their absorption and induce photoconductivity for irradiation with green, and shorter wavelength light. In principle, amorphous oxide semiconductor-based thin-film photoconductors could hence be applied as photosensors. However, their photoconductivity persists for hours after illumination has been removed, which severely degrades the response time and the frame rate of oxide-based sensor arrays. We have solved the problem of persistent photoconductivity (PPC) by developing a gated amorphous oxide semiconductor photo thin-film transistor (photo-TFT) that can provide direct control over the position of the Fermi level in the active layer. Applying a short-duration (10 ns) voltage pulse to these devices induces electron accumulation and accelerates their recombination with ionized oxygen vacancy sites, which are thought to cause PPC. We have integrated these photo-TFTs in a transparent active-matrix photosensor array that can be operated at high frame rates and that has potential applications in contact-free interactive displays.",
"title": ""
},
{
"docid": "ae8ad19049574cd52106e0df51cc4e68",
"text": "In the domain of e-health, there are diverse and heterogeneous health care systems with different brands on various platforms. One of the most important challenges in this field is the interoperability which plays a key role on information exchange and sharing. Achieving the interoperability is a difficult task because of complexity and diversity of systems, standards, and kinds of information. The lack of interoperability would lead to increase costs and errors of medical operation in hospitals. The purpose of this article is to present a conceptual model for solving interoperability in health information systems. A Health Service Bus (HSB) as an integrated infrastructure is suggested to facilitate Service Oriented Architecture. A scenario-based evaluation on the proposed conceptual model shows that adopting web service technology is an effective way for this task.",
"title": ""
},
{
"docid": "f5c4bdf959e455193221a1fa76e1895a",
"text": "This book contains a wide variety of hot topics on advanced computational intelligence methods which incorporate the concept of complex and hypercomplex number systems into the framework of artificial neural networks. In most chapters, the theoretical descriptions of the methodology and its applications to engineering problems are excellently balanced. This book suggests that a better information processing method could be brought about by selecting a more appropriate information representation scheme for specific problems, not only in artificial neural networks but also in other computational intelligence frameworks. The advantages of CVNNs and hypercomplex-valued neural networks over real-valued neural networks are confirmed in some case studies but still unclear in general. Hence, there is a need to further explore the difference between them from the viewpoint of nonlinear dynamical systems. Nevertheless, it seems that the applications of CVNNs and hypercomplex-valued neural networks are very promising.",
"title": ""
},
{
"docid": "b819c10fb84e576cb6444023246b91b0",
"text": "BCAAs (leucine, isoleucine, and valine), particularly leucine, have anabolic effects on protein metabolism by increasing the rate of protein synthesis and decreasing the rate of protein degradation in resting human muscle. Also, during recovery from endurance exercise, BCAAs were found to have anabolic effects in human muscle. These effects are likely to be mediated through changes in signaling pathways controlling protein synthesis. This involves phosphorylation of the mammalian target of rapamycin (mTOR) and sequential activation of 70-kD S6 protein kinase (p70 S6 kinase) and the eukaryotic initiation factor 4E-binding protein 1. Activation of p70 S6 kinase, and subsequent phopsphorylation of the ribosomal protein S6, is associated with enhanced translation of specific mRNAs. When BCAAs were supplied to subjects during and after one session of quadriceps muscle resistance exercise, an increase in mTOR, p70 S6 kinase, and S6 phosphorylation was found in the recovery period after the exercise with no effect of BCAAs on Akt or glycogen synthase kinase 3 (GSK-3) phosphorylation. Exercise without BCAA intake led to a partial phosphorylation of p70 S6 kinase without activating the enzyme, a decrease in Akt phosphorylation, and no change in GSK-3. It has previously been shown that leucine infusion increases p70 S6 kinase phosphorylation in an Akt-independent manner in resting subjects; however, a relation between mTOR and p70 S6 kinase has not been reported previously. The results suggest that BCAAs activate mTOR and p70 S6 kinase in human muscle in the recovery period after exercise and that GSK-3 is not involved in the anabolic action of BCAAs on human muscle. J. Nutr. 136: 269S–273S, 2006.",
"title": ""
},
{
"docid": "3125f7ce9487002b1ae00b254ba45753",
"text": "One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.",
"title": ""
},
{
"docid": "d69796a05afd0c0207ac7a60e68de5d3",
"text": "Onychomatricoma is a subungual tumor that is clinically characterized by banded or diffuse thickening, yellowish discoloration, splinter hemorrhages, and transverse overcurvature of the nail plate. It is often and easily misdiagnosed because the condition is not well known and rarely has been described. We report the case of a 45-year-old man with a subungual tumor on the right index finger that fulfills the criteria of onychomatricoma.",
"title": ""
}
] |
scidocsrr
|
abc40531e6679c59119d9ab9d273b6f4
|
A mechanistic performance model for superscalar out-of-order processors
|
[
{
"docid": "99511c1267d396d3745f075a40a06507",
"text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. (1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …",
"title": ""
}
] |
[
{
"docid": "3c14ce0d697c69f554a842c1dc997d66",
"text": "We propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that has both convolutional and deconvolutional layers, and combines feature extraction and segmentation prediction in a single model. The joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types. In contrast to existing automatic feature learning approaches, which are typically patch-based, our model learns features from entire images, which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. We have evaluated our method on the publicly available labeled cases from the MS lesion segmentation challenge 2008 data set, showing that our method performs comparably to the state-of-theart. In addition, we have evaluated our method on the images of 500 subjects from an MS clinical trial and varied the number of training samples from 5 to 250 to show that the segmentation performance can be greatly improved by having a representative data set.",
"title": ""
},
{
"docid": "d3c91b43a4ac5b50f2faa02811616e72",
"text": "BACKGROUND\nSleep disturbance is common among disaster survivors with posttraumatic stress symptoms but is rarely addressed as a primary therapeutic target. Sleep Dynamic Therapy (SDT), an integrated program of primarily evidence-based, nonpharmacologic sleep medicine therapies coupled with standard clinical sleep medicine instructions, was administered to a large group of fire evacuees to treat posttraumatic insomnia and nightmares and determine effects on posttraumatic stress severity.\n\n\nMETHOD\nThe trial was an uncontrolled, prospective pilot study of SDT for 66 adult men and women, 10 months after exposure to the Cerro Grande Fire. SDT was provided to the entire group in 6, weekly, 2-hour sessions. Primary and secondary outcomes included validated scales for insomnia, nightmares, posttraumatic stress, anxiety, and depression, assessed at 2 pretreatment baselines on average 8 weeks apart, weekly during treatment, posttreatment, and 12-week follow-up.\n\n\nRESULTS\nSixty-nine participants completed both pretreatment assessment, demonstrating small improvement in symptoms prior to starting SDT. Treatment and posttreatment assessments were completed by 66 participants, and 12-week follow-up was completed by 59 participants. From immediate pretreatment (second baseline) to posttreatment, all primary and secondary scales decreased significantly (all p values < .0001) with consistent medium-sized effects (Cohen's d = 0.29 to 1.09), and improvements were maintained at follow-up. Posttraumatic stress disorder subscales demonstrated similar changes: intrusion (d = 0.56), avoidance (d = 0.45), and arousal (d = 0.69). Fifty-three patients improved, 10 worsened, and 3 reported no change in posttraumatic stress.\n\n\nCONCLUSION\nIn an uncontrolled pilot study, chronic sleep symptoms in fire disaster evacuees were treated with SDT, which was associated with substantive and stable improvements in sleep disturbance, posttraumatic stress, anxiety, and depression 12 weeks after initiating treatment.",
"title": ""
},
{
"docid": "281c64b492a1aff7707dbbb5128799c8",
"text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.",
"title": ""
},
{
"docid": "77238fd26d6f4f5a16ca9c3bbf71fda6",
"text": "Decades of research have repeatedly shown that people perform poorly at estimating and understanding conditional probabilities that are inherent in Bayesian reasoning problems. Yet in the medical domain, both physicians and patients make daily, life-critical judgments based on conditional probability. Although there have been a number of attempts to develop more effective ways to facilitate Bayesian reasoning, reports of these findings tend to be inconsistent and sometimes even contradictory. For instance, the reported accuracies for individuals being able to correctly estimate conditional probability range from 6% to 62%. In this work, we show that problem representation can significantly affect accuracies. By controlling the amount of information presented to the user, we demonstrate how text and visualization designs can increase overall accuracies to as high as 77%. Additionally, we found that for users with high spatial ability, our designs can further improve their accuracies to as high as 100%. By and large, our findings provide explanations for the inconsistent reports on accuracy in Bayesian reasoning tasks and show a significant improvement over existing methods. We believe that these findings can have immediate impact on risk communication in health-related fields.",
"title": ""
},
{
"docid": "80c44d61e019f6858326fb9c5753c700",
"text": "This paper develops an Audio-Visual Speech Recognition (AVSR) method, by (1) exploring high-performance visual features, (2) applying audio and visual deep bottleneck features to improve AVSR performance, and (3) investigating effectiveness of voice activity detection in a visual modality. In our approach, many kinds of visual features are incorporated, subsequently converted into bottleneck features by deep learning technology. By using proposed features, we successfully achieved 73.66% lipreading accuracy in speaker-independent open condition, and about 90% AVSR accuracy on average in noisy environments. In addition, we extracted speech segments from visual features, resulting 77.80% lipreading accuracy. It is found VAD is useful in both audio and visual modalities, for better lipreading and AVSR.",
"title": ""
},
{
"docid": "d2e91eb39a06cb58fc847784a7e327d7",
"text": "Guided by an initial idea of building a complex (non linear) d ecision surface with maximalocal margin in input space, we give a possible geometrical intuition as to why K-Nearest Neighbor (KNN) al gorithms often perform more poorly than SVMs on classification tasks. We then propose modified K-Nearest Neighbor algorithms to overcome the perceived problem. The approach is similar in spirit to Tangent Distance , but with invariances inferred from the local neighborhood rath er than prior knowledge. Experimental results on real world classificati on asks suggest that the modified KNN algorithms often give a dramatic im provement over standard KNN and perform as well or better than SVMs .",
"title": ""
},
{
"docid": "5fd10b2277918255133f2e37a55e1103",
"text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.",
"title": ""
},
{
"docid": "725e826f13a17fe73369e85733431e32",
"text": "This study aims to explore the determinants influencing usage intention in mobile social media from the user motivation and the Theory of Planned Behavior (TPB) perspectives. Based on TPB, this study added three motivations, namely entertainment, sociality, and information, into the TPB model, and further examined the moderating effect of posters and lurkers in the relationships of the proposed model. A structural equation modeling was used and 468 LINE users in Taiwan were investigated. The results revealed that entertainment, sociality, and information are positively associated with behavioral attitude. Moreover, behavioral attitude, subjective norms, and perceived behavioral control are positively associated with usage intention. Furthermore, posters likely post messages on the LINE because of entertainment, sociality, and information, but they are not significantly subject to subjective norms. In contrast, lurkers tend to read, not write messages on the LINE because of entertainment and information rather than sociality and perceived behavioral control.",
"title": ""
},
{
"docid": "872370f375d779435eb098571f3ab763",
"text": "The aim of this study was to explore the potential of fused-deposition 3-dimensional printing (FDM 3DP) to produce modified-release drug loaded tablets. Two aminosalicylate isomers used in the treatment of inflammatory bowel disease (IBD), 5-aminosalicylic acid (5-ASA, mesalazine) and 4-aminosalicylic acid (4-ASA), were selected as model drugs. Commercially produced polyvinyl alcohol (PVA) filaments were loaded with the drugs in an ethanolic drug solution. A final drug-loading of 0.06% w/w and 0.25% w/w was achieved for the 5-ASA and 4-ASA strands, respectively. 10.5mm diameter tablets of both PVA/4-ASA and PVA/5-ASA were subsequently printed using an FDM 3D printer, and varying the weight and densities of the printed tablets was achieved by selecting the infill percentage in the printer software. The tablets were mechanically strong, and the FDM 3D printing was shown to be an effective process for the manufacture of the drug, 5-ASA. Significant thermal degradation of the active 4-ASA (50%) occurred during printing, however, indicating that the method may not be appropriate for drugs when printing at high temperatures exceeding those of the degradation point. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) of the formulated blends confirmed these findings while highlighting the potential of thermal analytical techniques to anticipate drug degradation issues in the 3D printing process. The results of the dissolution tests conducted in modified Hank's bicarbonate buffer showed that release profiles for both drugs were dependent on both the drug itself and on the infill percentage of the tablet. Our work here demonstrates the potential role of FDM 3DP as an efficient and low-cost alternative method of manufacturing individually tailored oral drug dosage, and also for production of modified-release formulations.",
"title": ""
},
{
"docid": "d786d4cb7b57885bc0bb2c2bfd892336",
"text": "Problem statement: Clustering is one of the most important research ar eas in the field of data mining. Clustering means creating groups of ob jects based on their features in such a way that th e objects belonging to the same groups are similar an d those belonging to different groups are dissimila r. Clustering is an unsupervised learning technique. T he main advantage of clustering is that interesting patterns and structures can be found directly from very large data sets with little or none of the background knowledge. Clustering algorithms can be applied in many domains. Approach: In this research, the most representative algorithms K-Mean s and K-Medoids were examined and analyzed based on their basic approach. The best algorithm i n each category was found out based on their performance. The input data points are generated by two ways, one by using normal distribution and another by applying uniform distribution. Results: The randomly distributed data points were taken as input to these algorithms and clusters are found ou t for each algorithm. The algorithms were implement ed using JAVA language and the performance was analyze d based on their clustering quality. The execution time for the algorithms in each category was compar ed for different runs. The accuracy of the algorith m was investigated during different execution of the program on the input data points. Conclusion: The average time taken by K-Means algorithm is greater than the time taken by K-Medoids algorithm for both the case of normal and uniform distributions. The r esults proved to be satisfactory.",
"title": ""
},
{
"docid": "931969dc54170c203db23f55b45dfa38",
"text": "The popularity and influence of reviews, make sites like Yelp ideal targets for malicious behaviors. We present Marco, a novel system that exploits the unique combination of social, spatial and temporal signals gleaned from Yelp, to detect venues whose ratings are impacted by fraudulent reviews. Marco increases the cost and complexity of attacks, by imposing a tradeoff on fraudsters, between their ability to impact venue ratings and their ability to remain undetected. We contribute a new dataset to the community, which consists of both ground truth and gold standard data. We show that Marco significantly outperforms state-of-the-art approaches, by achieving 94% accuracy in classifying reviews as fraudulent or genuine, and 95.8% accuracy in classifying venues as deceptive or legitimate. Marco successfully flagged 244 deceptive venues from our large dataset with 7,435 venues, 270,121 reviews and 195,417 users. Furthermore, we use Marco to evaluate the impact of Yelp events, organized for elite reviewers, on the hosting venues. We collect data from 149 Yelp elite events throughout the US. We show that two weeks after an event, twice as many hosting venues experience a significant rating boost rather than a negative impact. © 2015 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 0: 000–000, 2015",
"title": ""
},
{
"docid": "57c8b69c18b5b2c38552295f8e8789d5",
"text": "In many safety-critical applications such as autonomous driving and surgical robots, it is desirable to obtain prediction uncertainties from object detection modules to help support safe decision-making. Specifically, such modules need to estimate the probability of each predicted object in a given region and the confidence interval for its bounding box. While recent Bayesian deep learning methods provide a principled way to estimate this uncertainty, the estimates for the bounding boxes obtained using these methods are uncalibrated. In this paper, we address this problem for the single-object localization task by adapting an existing technique for calibrating regression models. We show, experimentally, that the resulting calibrated model obtains more reliable uncertainty estimates.",
"title": ""
},
{
"docid": "21e35c773ac9b9300f6df44854fcd141",
"text": "Time is a fundamental domain of experience. In this paper we ask whether aspects of language and culture affect how people think about this domain. Specifically, we consider whether English and Mandarin speakers think about time differently. We review all of the available evidence both for and against this hypothesis, and report new data that further support and refine it. The results demonstrate that English and Mandarin speakers do think about time differently. As predicted by patterns in language, Mandarin speakers are more likely than English speakers to think about time vertically (with earlier time-points above and later time-points below).",
"title": ""
},
{
"docid": "0618e88e1319a66cd7f69db491f78aca",
"text": "The rich dependency structure found in the columns of real-world relational databases can be exploited to great advantage, but can also cause query optimizers---which usually assume that columns are statistically independent---to underestimate the selectivities of conjunctive predicates by orders of magnitude. We introduce CORDS, an efficient and scalable tool for automatic discovery of correlations and soft functional dependencies between columns. CORDS searches for column pairs that might have interesting and useful dependency relations by systematically enumerating candidate pairs and simultaneously pruning unpromising candidates using a flexible set of heuristics. A robust chi-squared analysis is applied to a sample of column values in order to identify correlations, and the number of distinct values in the sampled columns is analyzed to detect soft functional dependencies. CORDS can be used as a data mining tool, producing dependency graphs that are of intrinsic interest. We focus primarily on the use of CORDS in query optimization. Specifically, CORDS recommends groups of columns on which to maintain certain simple joint statistics. These \"column-group\" statistics are then used by the optimizer to avoid naive selectivity estimates based on inappropriate independence assumptions. This approach, because of its simplicity and judicious use of sampling, is relatively easy to implement in existing commercial systems, has very low overhead, and scales well to the large numbers of columns and large table sizes found in real-world databases. Experiments with a prototype implementation show that the use of CORDS in query optimization can speed up query execution times by an order of magnitude. CORDS can be used in tandem with query feedback systems such as the LEO learning optimizer, leveraging the infrastructure of such systems to correct bad selectivity estimates and ameliorating the poor performance of feedback systems during slow learning phases.",
"title": ""
},
{
"docid": "231be28aafe8f071cb156d6efed900d4",
"text": "The aim of this review was to investigate current evidence for the type and quality of exercise being offered to chronic low back pain (CLBP) patients, within randomised controlled trials (RCTs), and to assess how treatment outcomes are being measured. A two-fold methodological approach was adopted: a methodological assessment identified RCTs of 'medium' or 'high' methodological quality. Exercise quality was subsequently assessed according to the predominant exercise used. Outcome measures were analysed based on current recommendations. Fifty-four relevant RCTs were identified, of which 51 were scored for methodological quality. Sixteen RCTs involving 1730 patients qualified for inclusion in this review based upon their methodological quality, and chronicity of symptoms; exercise had a positive effect in all 16 trials. Twelve out of 16 programmes incorporated strengthening exercise, of which 10 maintained their positive results at follow-up. Supervision and adequate compliance were common aspects of trials. A wide variety of outcome measures were used. Outcome measures did not adequately represent the guidelines for impairment, activity and participation, and impairment measures were over-represented at the expense of others. Despite the variety offered, exercise has a positive effect on CLBP patients, and results are largely maintained at follow-up. Strengthening is a common component of exercise programmes, however, the role of exercise co-interventions must not be overlooked. More high quality trials are needed to accurately assess the role of supervision and follow-up, together with the use of more appropriate outcome measures.",
"title": ""
},
{
"docid": "46ea713c4206d57144350a7871433392",
"text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.",
"title": ""
},
{
"docid": "022f0b83e93b82dfbdf7ae5f5ebe6f8f",
"text": "Most pregnant women at risk of for infection with Plasmodium vivax live in the Asia-Pacific region. However, malaria in pregnancy is not recognised as a priority by many governments, policy makers, and donors in this region. Robust data for the true burden of malaria throughout pregnancy are scarce. Nevertheless, when women have little immunity, each infection is potentially fatal to the mother, fetus, or both. WHO recommendations for the control of malaria in pregnancy are largely based on the situation in Africa, but strategies in the Asia-Pacific region are complicated by heterogeneous transmission settings, coexistence of multidrug-resistant Plasmodium falciparum and Plasmodium vivax parasites, and different vectors. Most knowledge of the epidemiology, effect, treatment, and prevention of malaria in pregnancy in the Asia-Pacific region comes from India, Papua New Guinea, and Thailand. Improved estimates of the morbidity and mortality of malaria in pregnancy are urgently needed. When malaria in pregnancy cannot be prevented, accurate diagnosis and prompt treatment are needed to avert dangerous symptomatic disease and to reduce effects on fetuses.",
"title": ""
},
{
"docid": "58a4b07717c1df99454fd70148cbe15b",
"text": "CONTEXT\nTo improve selective infraspinatus muscle strength and endurance, researchers have recommended selective shoulder external-rotation exercise during rehabilitation or athletic conditioning programs. Although selective strengthening of the infraspinatus muscle is recommended for therapy and training, limited information is available to help clinicians design a selective strengthening program.\n\n\nOBJECTIVE\nTo determine the most effective of 4 shoulder external-rotation exercises for selectively stimulating infraspinatus muscle activity while minimizing the use of the middle trapezius and posterior deltoid muscles.\n\n\nDESIGN\nCross-sectional study.\n\n\nSETTING\nUniversity research laboratory.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nA total of 30 healthy participants (24 men, 6 women; age = 22.6 ± 1.7 years, height = 176.2 ± 4.5 cm, mass = 65.6 ± 7.4 kg) from a university population.\n\n\nINTERVENTION(S)\nThe participants were instructed to perform 4 exercises: (1) prone horizontal abduction with external rotation (PER), (2) side-lying wiper exercise (SWE), (3) side-lying external rotation (SER), and (4) standing external-rotation exercise (STER).\n\n\nMAIN OUTCOME MEASURE(S)\nSurface electromyography signals were recorded from the infraspinatus, middle trapezius, and posterior deltoid muscles. Differences among the exercise positions were tested using a 1-way repeated-measures analysis of variance with Bonferroni adjustment.\n\n\nRESULTS\nThe infraspinatus muscle activity was greater in the SWE (55.98% ± 18.79%) than in the PER (46.14% ± 15.65%), SER (43.38% ± 22.26%), and STER (26.11% ± 15.00%) (F3,87 = 19.97, P < .001). Furthermore, the SWE elicited the least amount of activity in the middle trapezius muscle (F3,87 = 20.15, P < .001). Posterior deltoid muscle activity was similar in the SWE and SER but less than that measured in the PER and STER (F3,87 = 25.10, P < .001).\n\n\nCONCLUSIONS\nThe SWE was superior to the PER, SER, and STER in maximizing infraspinatus activity with the least amount of middle trapezius and posterior deltoid activity. These findings may help clinicians design effective exercise programs.",
"title": ""
},
{
"docid": "cd37d9ab471d99a82ae3ba324695f5ac",
"text": "Recently, a supervised dictionary learning (SDL) approach based on the Hilbert-Schmidt independence criterion (HSIC) has been proposed that learns the dictionary and the corresponding sparse coefficients in a space where the dependency between the data and the corresponding labels is maximized. In this paper, two multiview dictionary learning techniques are proposed based on this HSIC-based SDL. While one of these two techniques learns one dictionary and the corresponding coefficients in the space of fused features in all views, the other learns one dictionary in each view and subsequently fuses the sparse coefficients in the spaces of learned dictionaries. The effectiveness of the proposed multiview learning techniques in using the complementary information of single views is demonstrated in the application of speech emotion recognition (SER). The fully-continuous sub-challenge (FCSC) of the AVEC 2012 dataset is used in two different views: baseline and spectral energy distribution (SED) feature sets. Four dimensional affects, i.e., arousal, expectation, power, and valence are predicted using the proposed multiview methods as the continuous response variables. The results are compared with the single views, AVEC 2012 baseline system, and also other supervised and unsupervised multiview learning approaches in the literature. Using correlation coefficient as the performance measure in predicting the continuous dimensional affects, it is shown that the proposed approach achieves the highest performance among the rivals. The relative performance of the two proposed multiview techniques and their relationship are also discussed. Particularly, it is shown that by providing an additional constraint on the dictionary of one of these approaches, it becomes the same as the other.",
"title": ""
}
] |
scidocsrr
|
15f0d791b31da3a2058f938187bccee1
|
An Empirical Evaluation of Visual Question Answering for Novel Objects
|
[
{
"docid": "bdcd0cad7a2abcb482b1a0755a2e7af4",
"text": "We present a novel attribute learning framework named Hypergraph-based Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the attribute relations in the data. Then the attribute prediction problem is casted as a regularized hypergraph cut problem, in which a collection of attribute projections is jointly learnt from the feature space to a hypergraph embedding space aligned with the attributes. The learned projections directly act as attribute classifiers (linear and kernelized). This formulation leads to a very efficient approach. By considering our model as a multi-graph cut task, our framework can flexibly incorporate other available information, in particular class label. We apply our approach to attribute prediction, Zero-shot and N-shot learning tasks. The results on AWA, USAA and CUB databases demonstrate the value of our methods in comparison with the state-of-the-art approaches.",
"title": ""
},
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
}
] |
[
{
"docid": "43d46b56cdf20c8b8b67831caddfe4db",
"text": "This research addresses a challenging issue that is to recognize spoken Arabic letters, that are three letters of hijaiyah that have indentical pronounciation when pronounced by Indonesian speakers but actually has different makhraj in Arabic, the letters are sa, sya and tsa. The research uses Mel-Frequency Cepstral Coefficients (MFCC) based feature extraction and Artificial Neural Network (ANN) classification method. The result shows the proposed method obtain a good accuracy with an average acuracy is 92.42%, with recognition accuracy each letters (sa, sya, and tsa) prespectivly 92.38%, 93.26% and 91.63%.",
"title": ""
},
{
"docid": "57e7635cb3bda615a1566a883d781149",
"text": "The aim of this work is to propose a fusion procedure based on lidar and camera to solve the pedestrian detection problem in autonomous driving. Current pedestrian detection algorithms have focused on improving the discriminability of 2D features that capture the pedestrian appearance, and on using various classifier architectures. However, less focus on exploiting the 3D structure of object has limited the pedestrian detection performance and practicality. To tackle these issues, a lidar subsystem is applied here in order to extract object structure features and train a SVM classifier, reducing the number of candidate windows that are tested by a state-of-the-art pedestrian appearance classifier. Additionally, we propose a probabilistic framework to fuse pedestrian detection given by both subsystems. With the proposed framework, we have achieved state-of-the-art performance at 20 fps on our own pedestrian dataset gathered in a challenging urban scenario.",
"title": ""
},
{
"docid": "a38105bda456a970b75422df194ecd68",
"text": "Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s(2) peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.",
"title": ""
},
{
"docid": "023fa7d033c94afc8c2864d42c7a4a21",
"text": "This paper presents a literature review on English OCR techniques. English OCR system is compulsory to convert numerous published books of English into editable computer text files. Latest research in this area has been able to grown some new methodologies to overcome the complexity of English writing style. Still these algorithms have not been tested for complete characters of English Alphabet. Hence, a system is required which can handle all classes of English text and identify characters among these classes.",
"title": ""
},
{
"docid": "b42037d4a491c9fb9cd756d11411d95b",
"text": "Control of Induction Motor (IM) is well known to be difficult owing to the fact the mathematical models of IM are highly nonlinear and time variant. The advent of vector control techniques has solved induction motor control problems. The most commonly used controller for the speed control of induction motor is traditional Proportional plus Integral (PI) controller. However, the conventional PI controller has some demerits such as: the high starting overshoot in speed, sensitivity to controller gains and sluggish response due to sudden change in load torque. To overcome these problems, replacement of PI controller by Integral plus Proportional (IP) controller is proposed in this paper. The goal is to determine which control strategy delivers better performance with respect to induction motor’s speed. Performance of these controllers has been verified through simulation using MATLAB/SIMULINK software package for different operating conditions. According to the simulation results, IP controller creates better performance in terms of overshoot, settling time, and steady state error compared to conventional PI controller. This shows the superiority of IP controller over conventional PI controller.",
"title": ""
},
{
"docid": "0ee4861c7864c29746a472c00038ffbd",
"text": "Semantic parsing aims at mapping natural language utterances into structured meaning representations. In this work, we propose a structure-aware neural architecture which decomposes the semantic parsing process into two stages. Given an input utterance, we first generate a rough sketch of its meaning, where low-level information (such as variable names and arguments) is glossed over. Then, we fill in missing details by taking into account the natural language input and the sketch itself. Experimental results on four datasets characteristic of different domains and meaning representations show that our approach consistently improves performance, achieving competitive results despite the use of relatively simple decoders.",
"title": ""
},
{
"docid": "e2867713be67291ee8c25afa3e2d1319",
"text": "In recent years the <i>l</i><sub>1</sub>, <sub>∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the <i>l</i><sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of <i>l</i><sub>1</sub>, <sub>∞</sub> regularized problems. The main challenge in developing such a method resides on being able to compute efficient projections to the <i>l</i><sub>1</sub>, <sub>∞</sub> ball. We present an algorithm that works in <i>O</i>(<i>n</i> log <i>n</i>) time and <i>O</i>(<i>n</i>) memory where <i>n</i> is the number of parameters. We test our algorithm in a multi-task image annotation problem. Our results show that <i>l</i><sub>1</sub>, <sub>∞</sub> leads to better performance than both <i>l</i><sub>2</sub> and <i>l</i><sub>1</sub> regularization and that it is is effective in discovering jointly sparse solutions.",
"title": ""
},
{
"docid": "f5a7a7b2848bb2d8cd230650c19f74f4",
"text": "A CMOS image sensor capable of imaging and energy harvesting on same focal plane is presented for retinal prosthesis. The energy harvesting and imaging (EHI) active pixel sensor (APS) imager was designed, fabricated, and tested in a standard 0.5 μm CMOS process. It has 54 × 50 array of 21 × 21 μm2 EHI pixels, 10-bit supply boosted (SB) SAR ADC, and charge pump circuits consuming only 14.25 μW from 1.2 V and running at 7.4 frames per second. The supply boosting technique (SBT) is used in an analog signal chain of the EHI imager. Harvested solar energy on focal plane is stored on an off-chip capacitor with the help of a charge pump circuit with better than 70% efficiency. Energy harvesting efficiency of the EHI pixel was measured at different light levels. It was 9.4% while producing 0.41 V open circuit voltage. The EHI imager delivers 3.35 μW of power was delivered to a resistive load at maximum power point operation. The measured pixel array figure of merit (FoM) was 1.32 pW/frame/pixel while imager figure of merit (iFoM) including whole chip power consumption was 696 fJ/pixel/code for the EHI imager.",
"title": ""
},
{
"docid": "83a644ac25c7db156d787629060fb32a",
"text": "In this paper we study face recognition across ages within a real passport photo verification task. First, we propose using the gradient orientation pyramid for this task. Discarding the gradient magnitude and utilizing hierarchical techniques, we found that the new descriptor yields a robust and discriminative representation. With the proposed descriptor, we model face verification as a two-class problem and use a support vector machine as a classifier. The approach is applied to two passport data sets containing more than 1,800 image pairs from each person with large age differences. Although simple, our approach outperforms previously tested Bayesian technique and other descriptors, including the intensity difference and gradient with magnitude. In addition, it works as well as two commercial systems. Second, for the first time, we empirically study how age differences affect recognition performance. Our experiments show that, although the aging process adds difficulty to the recognition task, it does not surpass illumination or expression as a confounding factor.",
"title": ""
},
{
"docid": "81ebd9e963bb7db3eba8b303ba08e8cd",
"text": "In this paper, nonlinear dynamic equations of a wheeled mobile robot (WMR) are described in the state-space form where the parameters are part of the state (angular velocities of the wheels). This representation, known as quasi-linear parameter varying (Quasi-LPV), is useful for control designs based on nonlinear Hinfin approaches. Two nonlinear Hinfin controllers that guarantee induced L2-norm, between input (disturbances) and output signals, bounded by an attenuation level γ are used to control a WMR. These controllers are solved via linear matrix inequalities (LMIs) and algebraic Riccati equation. Experimental results are presented, with a comparative study among these robust control strategies and the standard computed torque, plus proportional-derivative, controller.",
"title": ""
},
{
"docid": "84a1ccd4b32b2b557c3702178ececfc7",
"text": "Embedded systems are at the core of many security-sensitive and safety-critical applications, including automotive, industrial control systems, and critical infrastructures. Existing protection mechanisms against (software-based) malware are inflexible, too complex, expensive, or do not meet real-time requirements.\n We present TyTAN, which, to the best of our knowledge, is the first security architecture for embedded systems that provides (1) hardware-assisted strong isolation of dynamically configurable tasks and (2) real-time guarantees. We implemented TyTAN on the Intel® Siskiyou Peak embedded platform and demonstrate its efficiency and effectiveness through extensive evaluation.",
"title": ""
},
{
"docid": "32b6c7d2ab3aec47af631614b4bb3409",
"text": "In the present study, we developed a novel color scale for visual assessment, conforming to theoretical color changes of a gum, to evaluate masticatoryperformance; moreover, we investigated the reliability and validity of this evaluation method using the color scale. Ten participants (aged 26.30 years) with natural dentition chewed the gum at several chewing strokes. Changes in color were measured using a colorimeter, and then, linearregression expressions that represented changes in gum color were derived. The color scale was developed using these regression expressions. Thirty-two chewed gums were evaluated using colorimeter and were assessed three times using the color scale by six dentists aged 25.27 (mean, 25.8) years, six preclinical dental students aged 21.23 (mean, 22.2) years, and six elderly individuals aged 68.84 (mean, 74.0) years. The intrarater and interrater reliability of evaluations was assessed using intraclass correlation coefficients. Validity of the method compared with a colorimeter was assessed using Spearman's rank correlation coefficient. All intraclass correlation coefficients were > 0.90, and Spearman's rank-correlation coefficients were > 0.95 in all groups. These results indicated that the evaluation method of the color-changeable chewing gum using the newly developed color scale is reliable and valid.",
"title": ""
},
{
"docid": "54ef290e7c8fbc5c1bcd459df9bc4a06",
"text": "Augmenter of Liver Regeneration (ALR) is a sulfhydryl oxidase carrying out fundamental functions facilitating protein disulfide bond formation. In mammals, it also functions as a hepatotrophic growth factor that specifically stimulates hepatocyte proliferation and promotes liver regeneration after liver damage or partial hepatectomy. Whether ALR also plays a role during vertebrate hepatogenesis is unknown. In this work, we investigated the function of alr in liver organogenesis in zebrafish model. We showed that alr is expressed in liver throughout hepatogenesis. Knockdown of alr through morpholino antisense oligonucleotide (MO) leads to suppression of liver outgrowth while overexpression of alr promotes liver growth. The small-liver phenotype in alr morphants results from a reduction of hepatocyte proliferation without affecting apoptosis. When expressed in cultured cells, zebrafish Alr exists as dimer and is localized in mitochondria as well as cytosol but not in nucleus or secreted outside of the cell. Similar to mammalian ALR, zebrafish Alr is a flavin-linked sulfhydryl oxidase and mutation of the conserved cysteine in the CxxC motif abolishes its enzymatic activity. Interestingly, overexpression of either wild type Alr or enzyme-inactive Alr(C131S) mutant promoted liver growth and rescued the liver growth defect of alr morphants. Nevertheless, alr(C131S) is less efficacious in both functions. Meantime, high doses of alr MOs lead to widespread developmental defects and early embryonic death in an alr sequence-dependent manner. These results suggest that alr promotes zebrafish liver outgrowth using mechanisms that are dependent as well as independent of its sulfhydryl oxidase activity. This is the first demonstration of a developmental role of alr in vertebrate. It exemplifies that a low-level sulfhydryl oxidase activity of Alr is essential for embryonic development and cellular survival. The dose-dependent and partial suppression of alr expression through MO-mediated knockdown allows the identification of its late developmental role in vertebrate liver organogenesis.",
"title": ""
},
{
"docid": "83f13e90a0f0997a823d25534b6fc629",
"text": "High-frequency-link (HFL) power conversion systems (PCSs) are attracting more and more attentions in academia and industry for high power density, reduced weight, and low noise without compromising efficiency, cost, and reliability. In HFL PCSs, dual-active-bridge (DAB) isolated bidirectional dc-dc converter (IBDC) serves as the core circuit. This paper gives an overview of DAB-IBDC for HFL PCSs. First, the research necessity and development history are introduced. Second, the research subjects about basic characterization, control strategy, soft-switching solution and variant, as well as hardware design and optimization are reviewed and analyzed. On this basis, several typical application schemes of DAB-IBDC for HPL PCSs are presented in a worldwide scope. Finally, design recommendations and future trends are presented. As the core circuit of HFL PCSs, DAB-IBDC has wide prospects. The large-scale practical application of DAB-IBDC for HFL PCSs is expected with the recent advances in solid-state semiconductors, magnetic and capacitive materials, and microelectronic technologies.",
"title": ""
},
{
"docid": "379a1661e34ac08bebb4e0d8e27406f6",
"text": "Precise segmentation of three-dimensional (3D) magnetic resonance angiography (MRA) images can be a very useful computer aided diagnosis (CAD) tool for clinical routines. Level sets based evolution schemes, which have been shown to be effective and easy to implement for many segmentation applications, are being applied to MRA data sets. In this paper, we present a segmentation scheme for accurately extracting vasculature from MRA images. Our proposed algorithm models capillary action and derives a capillary active contour for segmentation of thin vessels. The algorithm is implemented using the level set method and has been applied successfully on real 3D MRA images. Compared with other state-of-the-art MRA segmentation algorithms, experiments show that our method facilitates more accurate segmentation of thin blood vessels.",
"title": ""
},
{
"docid": "07e93064b1971a32b5c85b251f207348",
"text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.",
"title": ""
},
{
"docid": "7dfb3c8159e7758c414d3e8f92a0bc40",
"text": "The net primary production of the biosphere is consumed largely by microorganisms; whose metabolism creates the trophic base for detrital foodwebs, drives element cycles, and mediates atmospheric composition. Biogeochemical constraints on microbial catabolism, relative to primary production, create reserves of detrital organic carbon in soils and sediments that exceed the carbon content of the atmosphere and biomass. The production of organic matter is an intracellular process that generates thousands of compounds from a small number of precursors drawn from intermediary metabolism. Osmotrophs generate growth substrates from the products of biosynthesis and diagenesis by enzyme-catalyzed reactions that occur largely outside cells. These enzymes, which we define as ecoenzymes, enter the environment by secretion and lysis. Enzyme expression is regulated by environmental signals, but once released from the cell, ecoenzymatic activity is determined by environmental interactions, represented as a kinetic cascade, that lead to multiphasic kinetics and large spatiotemporal variation. At the ecosystem level, these interactions can be viewed as an energy landscape that directs the availability and flow of resources. Ecoenzymatic activity and microbial metabolism are integrated on the basis of resource demand relative to environmental availability. Macroecological studies show that the most widely measured ecoenzymatic activities have a similar stoichiometry for all microbial communities. Ecoenzymatic stoichiometry connects the elemental stoichiometry of microbial biomass and detrital organic matter to microbial nutrient assimilation and growth. We present a model that combines the kinetics of enzyme activity and community growth under conditions of multiple resource limitation with elements of metabolic and ecological stoichiometry theory. This biogeochemical equilibrium model provides a framework for comparative studies of microbial community metabolism, the principal driver of biogeochemical cycles.",
"title": ""
},
{
"docid": "103b784d7cc23663584486fa3ca396bb",
"text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.",
"title": ""
},
{
"docid": "f52a231bb6c1953dad1a6b3fb04f0c53",
"text": "We propose to capture humans’ variable and idiosyncratic sentiment via building personalized sentiment classification models at a group level. Our solution roots in the social comparison theory that humans tend to form groups with others of similar minds and ability, and the cognitive consistency theory that mutual influence inside groups will eventually shape group norms and attitudes, with which group members will all shift to align. We formalize personalized sentiment classification as a multi-task learning problem. In particular, to exploit the clustering property of users’ opinions, we impose a non-parametric Dirichlet Process prior over the personalized models, in which group members share the same customized sentiment model adapted from a global classifier. Extensive experimental evaluations on large collections of Amazon and Yelp reviews confirm the effectiveness of the proposed solution: it outperformed user-independent classification solutions, and several stateof-the-art model adaptation and multi-task learning algorithms.",
"title": ""
},
{
"docid": "1eb415cae9b39655849537cdc007f51f",
"text": "Aesthetics has been the subject of long-standing debates by philosophers and psychologists alike. In psychology, it is generally agreed that aesthetic experience results from an interaction between perception, cognition, and emotion. By experimental means, this triad has been studied in the field of experimental aesthetics, which aims to gain a better understanding of how aesthetic experience relates to fundamental principles of human visual perception and brain processes. Recently, researchers in computer vision have also gained interest in the topic, giving rise to the field of computational aesthetics. With computing hardware and methodology developing at a high pace, the modeling of perceptually relevant aspect of aesthetic stimuli has a huge potential. In this review, we present an overview of recent developments in computational aesthetics and how they relate to experimental studies. In the first part, we cover topics such as the prediction of ratings, style and artist identification as well as computational methods in art history, such as the detection of influences among artists or forgeries. We also describe currently used computational algorithms, such as classifiers and deep neural networks. In the second part, we summarize results from the field of experimental aesthetics and cover several isolated image properties that are believed to have a effect on the aesthetic appeal of visual stimuli. Their relation to each other and to findings from computational aesthetics are discussed. Moreover, we compare the strategies in the two fields of research and suggest that both fields would greatly profit from a joined research effort. We hope to encourage researchers from both disciplines to work more closely together in order to understand visual aesthetics from an integrated point of view.",
"title": ""
}
] |
scidocsrr
|
c4f854c1dc799d9701d8c708a58bf9f6
|
CoReCast: Collision Resilient Broadcasting in Vehicular Networks
|
[
{
"docid": "65e3890edd57a0a6de65b4e38f3cea1c",
"text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an `1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of `1-analysis for such problems.",
"title": ""
},
{
"docid": "8d6da0919363f3c528e9105ee41b0315",
"text": "There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors.\n This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.",
"title": ""
},
{
"docid": "766bc5cee369a729dc310c7134edc36e",
"text": "Spatial multiple access holds the promise to boost the capacity of wireless networks when an access point has multiple antennas. Due to the asynchronous and uncontrolled nature of wireless LANs, conventional MIMO technology does not work efficiently when concurrent transmissions from multiple stations are uncoordinated. In this paper, we present the design and implementation of a crosslayer system, called SAM, that addresses the challenges of enabling spatial multiple access for multiple devices in a random access network like WLAN. SAM uses a chain-decoding technique to reliably recover the channel parameters for each device, and iteratively decode concurrent frames with misaligned symbol timings and frequency offsets. We propose a new MAC protocol, called CCMA, to enable concurrent transmissions by different mobile stations while remaining backward compatible with 802.11. Finally, we implement the PHY and MAC layer of SAM using the Sora high-performance software radio platform. Our evaluation results under real wireless conditions show that SAM can improve network uplink throughput by 70% with two antennas over 802.11.",
"title": ""
}
] |
[
{
"docid": "8d258bac9030dae406fff2c13ae0db43",
"text": "This paper investigates the validity of Kleinberg’s axioms for clustering functions with respect to the quite popular clustering algorithm called k-means.We suggest that the reason why this algorithm does not fit Kleinberg’s axiomatic system stems from missing match between informal intuitions and formal formulations of the axioms. While Kleinberg’s axioms have been discussed heavily in the past, we concentrate here on the case predominantly relevant for k-means algorithm, that is behavior embedded in Euclidean space. We point at some contradictions and counter intuitiveness aspects of this axiomatic set within R that were evidently not discussed so far. Our results suggest that apparently without defining clearly what kind of clusters we expect we will not be able to construct a valid axiomatic system. In particular we look at the shape and the gaps between the clusters. Finally we demonstrate that there exist several ways to reconcile the formulation of the axioms with their intended meaning and that under this reformulation the axioms stop to be contradictory and the real-world k-means algorithm conforms to this axiomatic system.",
"title": ""
},
{
"docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05",
"text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.",
"title": ""
},
{
"docid": "451434f1181c021eb49442d6eb6617c5",
"text": "In this paper, we use variational recurrent neural network to investigate the anomaly detection problem on graph time series. The temporal correlation is modeled by the combination of recurrent neural network (RNN) and variational inference (VI), while the spatial information is captured by the graph convolutional network. In order to incorporate external factors, we use feature extractor to augment the transition of latent variables, which can learn the influence of external factors. With the target function as accumulative ELBO, it is easy to extend this model to on-line method. The experimental study on traffic flow data shows the detection capability of the proposed method.",
"title": ""
},
{
"docid": "1759b81ec84163a829b2dc16a75d3fa6",
"text": "Today's memory technologies, such as DRAM, SRAM, and NAND Flash, are facing major challenges with regard to their continued scaling. For instance, ITRS projects that DRAM cannot scale easily below 40nm as the cost and energy/power are hard -if not impossible- to scale. Fortunately, the international memory technology community has been researching other alternative for more than fifteen years. Apparently, non-volatile resistive memories are promising to replace the today's memories for many reasons such as better scalability, low cost, higher capacity, lower energy, CMOS compatibility, better configurability, etc. This paper discusses and highlights three major aspects of resistive memories, especially memristor based memories: (a) technology and design constraints, (b) architectures, and (c) testing and design-for-test. It shows the opportunities and the challenges.",
"title": ""
},
{
"docid": "5d88d94da2fd8be95ed4258c5ff24f9a",
"text": "Database query processing traditionally relies on three alternative join algorithms: index nested loops join exploits an index on its inner input, merge join exploits sorted inputs, and hash join exploits differences in the sizes of the join inputs. Cost-based query optimization chooses the most appropriate algorithm for each query and for each operation. Unfortunately, mistaken algorithm choices during compile-time query optimization are common yet expensive to investigate and to resolve. Our goal is to end mistaken choices among join algorithms by replacing the three traditional join algorithms with a single one. Like merge join, this new join algorithm exploits sorted inputs. Like hash join, it exploits different input sizes for unsorted inputs. In fact, for unsorted inputs, the cost functions for recursive hash join and for hybrid hash join have guided our search for the new join algorithm. In consequence, the new join algorithm can replace both merge join and hash join in a database management system. The in-memory components of the new join algorithm employ indexes. If the database contains indexes for one (or both) of the inputs, the new join can exploit persistent indexes instead of temporary in-memory indexes. Using database indexes to match input records, the new join algorithm can also replace index nested loops join. Results from an implementation of the core algorithm are reported.",
"title": ""
},
{
"docid": "a97f71e0d5501add1ae08eeee5378045",
"text": "Machine learning is being implemented in bioinformatics and computational biology to solve challenging problems emerged in the analysis and modeling of biological data such as DNA, RNA, and protein. The major problems in classifying protein sequences into existing families/superfamilies are the following: the selection of a suitable sequence encoding method, the extraction of an optimized subset of features that possesses significant discriminatory information, and the adaptation of an appropriate learning algorithm that classifies protein sequences with higher classification accuracy. The accurate classification of protein sequence would be helpful in determining the structure and function of novel protein sequences. In this article, we have proposed a distance-based sequence encoding algorithm that captures the sequence’s statistical characteristics along with amino acids sequence order information. A statistical metric-based feature selection algorithm is then adopted to identify the reduced set of features to represent the original feature space. The performance of the proposed technique is validated using some of the best performing classifiers implemented previously for protein sequence classification. An average classification accuracy of 92% was achieved on the yeast protein sequence data set downloaded from the benchmark UniProtKB database.",
"title": ""
},
{
"docid": "b5c27fa3dbcd917f7cdc815965b22a67",
"text": "Our aim is to provide a pixel-wise instance-level labeling of a monocular image in the context of autonomous driving. We build on recent work [32] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [32] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [15]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [32].",
"title": ""
},
{
"docid": "c8787aa5e3d00452dbf7aaa93c2a4307",
"text": "In recent years, several mobile applications allowed individuals to anonymously share information with friends and contacts, without any persistent identity marker. The functions of these \"tie-based\" anonymity services may be notably different than other social media services. We use semi-structured interviews to qualitatively examine motivations, practices and perceptions in two tie-based anonymity apps: Secret (now defunct, in the US) and Mimi (in China). Among the findings, we show that: (1) while users are more comfortable in self-disclosure, they still have specific practices and strategies to avoid or allow identification; (2) attempts for deidentification of others are prevalent and often elaborate; and (3) participants come to expect both negativity and support in response to posts. Our findings highlight unique opportunities and potential benefits for tie-based anonymity apps, including serving disclosure needs and social probing. Still, challenges for making such applications successful, for example the prevalence of negativity and bullying, are substantial.",
"title": ""
},
{
"docid": "2a13609a94050c4477d94cf0d89cbdd3",
"text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.",
"title": ""
},
{
"docid": "e615ff8da6cdd43357e41aa97df88cc0",
"text": "In recent years, increasing numbers of people have been choosing herbal medicines or products to improve their health conditions, either alone or in combination with others. Herbs are staging a comeback and herbal \"renaissance\" occurs all over the world. According to the World Health Organization, 75% of the world's populations are using herbs for basic healthcare needs. Since the dawn of mankind, in fact, the use of herbs/plants has offered an effective medicine for the treatment of illnesses. Moreover, many conventional/pharmaceutical drugs are derived directly from both nature and traditional remedies distributed around the world. Up to now, the practice of herbal medicine entails the use of more than 53,000 species, and a number of these are facing the threat of extinction due to overexploitation. This paper aims to provide a review of the history and status quo of Chinese, Indian, and Arabic herbal medicines in terms of their significant contribution to the health promotion in present-day over-populated and aging societies. Attention will be focused on the depletion of plant resources on earth in meeting the increasing demand for herbs.",
"title": ""
},
{
"docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8",
"text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.",
"title": ""
},
{
"docid": "dae40fa32526bf965bad70f98eb51bb7",
"text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.",
"title": ""
},
{
"docid": "22293b6953e2b28e1b3dc209649a7286",
"text": "The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.",
"title": ""
},
{
"docid": "924768b271caa9d1ba0cb32ab512f92e",
"text": "Traditional keyboard and mouse based presentation prevents lecturers from interacting with the audiences freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handle controller with 3-axis accelerometer and gyro and transmitted to host-side through bluetooth, then we use Bayesian change point detection to segment continuous gesture series and HMM to recognize the gesture. In consequence Slideshow could carry out the corresponding operations on PowerPoint(PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation. Both the experimental and testing results show our approach is practical, useful and convenient.",
"title": ""
},
{
"docid": "b7dd7ad186b55f02724e89f1d29dd285",
"text": "The Web of Linked Data is built upon the idea that data items on the Web are connected by RDF links. Sadly, the reality on the Web shows that Linked Data sources set some RDF links pointing at data items in related data sources, but they clearly do not set RDF links to all data sources that provide related data. In this paper, we present Silk Server, an identity resolution component, which can be used within Linked Data application architectures to augment Web data with additional RDF links. Silk Server is designed to be used with an incoming stream of RDF instances, produced for example by a Linked Data crawler. Silk Server matches the RDF descriptions of incoming instances against a local set of known instances and discovers missing links between them. Based on this assessment, an application can store data about newly discovered instances in its repository or fuse data that is already known about an entity with additional data about the entity from the Web. Afterwards, we report on the results of an experiment in which Silk Server was used to generate RDF links between authors and publications from the Semantic Web Dog Food Corpus and a stream of FOAF profiles that were crawled from the Web.",
"title": ""
},
{
"docid": "9e91f7e57e074ec49879598c13035d70",
"text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.",
"title": ""
},
{
"docid": "86f25f09b801d28ce32f1257a39ddd44",
"text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.",
"title": ""
},
{
"docid": "204f7e7763b447c1aeff1dc6fb639786",
"text": "Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. We give theoretical guarantees on how well the samples approximate the true posterior, and have empirical results showing the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.",
"title": ""
},
{
"docid": "3a6197322da0e5fe2c2d98a8fcba7a42",
"text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.",
"title": ""
},
{
"docid": "f69ff67f18e9bd7f5c21a4ee160b24c8",
"text": "In this paper, we propose a novel sequential neural network with structure attention to model information diffusion. The proposed model explores both sequential nature of an information diffusion process and structural characteristics of user connection graph. The recurrent neural network framework is employed to model the sequential information. The attention mechanism is incorporated to capture the structural dependency among users, which is defined as the diffusion context of a user. A gating mechanism is further developed to effectively integrate the sequential and structural information. The proposed model is evaluated on the diffusion prediction task. The performances on both synthetic and real datasets demonstrate its superiority over popular baselines and state-of-the-art sequence-based models.",
"title": ""
}
] |
scidocsrr
|
f8b69e6a2235495c30118ea6a82afc40
|
An Intelligent V2I-Based Traffic Management System
|
[
{
"docid": "b37fb73811110ec7a095e98df66f0ee0",
"text": "This paper looks into recent developments and research trends in collision avoidance/warning systems and automation of vehicle longitudinal/lateral control tasks. It is an attempt to provide a bigger picture of the very diverse, detailed and highly multidisciplinary research in this area. Based on diversely selected research, this paper explains the initiatives for automation in different levels of transportation system with a specific emphasis on the vehicle-level automation. Human factor studies and legal issues are analyzed as well as control algorithms. Drivers’ comfort and well being, increased safety, and increased highway capacity are among the most important initiatives counted for automation. However, sometimes these are contradictory requirements. Relying on an analytical survey of the published research, we will try to provide a more clear understanding of the impact of automation/warning systems on each of the above-mentioned factors. The discussion of sensory issues requires a dedicated paper due to its broad range and is not addressed in this paper.",
"title": ""
}
] |
[
{
"docid": "95dbebf3ed125e2a4f0d901f42f09be3",
"text": "Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ~ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz.",
"title": ""
},
{
"docid": "85e43d5afefc791725a05c8e554653bf",
"text": "An analytical model of an ultrawideband range gating radar is developed. The model is used for the system design of a radar for breath activity monitoring having sub-millimeter movement resolution and fulfilling the requirements of the Federal Communications Commission in terms of effective isotropic radiated power. The system study has allowed to define the requirements of the various radar subsystems that have been designed and realized by means of a low cost hybrid technology. The radar has been assembled and some performance factors, such as range and movement resolution, and the receiver conversion factor have been experimentally evaluated and compared with the model predictions. Finally, the radar has been tested for remote breath activity monitoring, showing recorded respiratory signals in very good agreement with those obtained by means of a conventional technique employing a piezoelectric belt.",
"title": ""
},
{
"docid": "044c2d12cebe964f7e3597b5bb8a2e35",
"text": "In this paper, fuzzy logic enhanced generic color model for fire pixel classification is proposed. The model uses YCbCr color space to separate the luminance from the chrominance more effectively than color spaces such as RGB or rgb. Concepts from fuzzy logic are used to replace existing heuristic rules and make the classification more robust in effectively discriminating fire and fire like colored objects. Further discrimination between fire and non fire pixels are achieved by a statistically derived chrominance model which is expressed as a region in the chrominance plane. The performance of the model is tested on two large sets of images; one set contains fire while the other set contains no fire but has regions similar to fire color. The model achieves up to 99.00% correct fire detection rate with a 9.50% false alarm rate.",
"title": ""
},
{
"docid": "f9b11e55be907175d969cd7e76803caf",
"text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.",
"title": ""
},
{
"docid": "41df967b371c9e649a551706c87025a0",
"text": "Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome—a quantum state indicating which error has occurred—by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.",
"title": ""
},
{
"docid": "5f1474036533a4583520ea2526d35daf",
"text": "We motivate the integration of programming by example and natural language programming by developing a system for specifying programs for simple text editing operations based on regular expressions. The programs are described with unconstrained natural language instructions, and providing one or more examples of input/output. We show that natural language allows the system to deduce the correct program much more often and much faster than is possible with the input/output example(s) alone, showing that natural language programming and programming by example can be combined in a way that overcomes the ambiguities that both methods suffer from individually and, at the same time, provides a more natural interface to the user.",
"title": ""
},
{
"docid": "b712bbcad29af3bb8ad210fc9bbeab24",
"text": "Image-based virtual try-on systems for fitting a new in-shop clothes into a person image have attracted increasing research attention, yet is still challenging. A desirable pipeline should not only transform the target clothes into the most fitting shape seamlessly but also preserve well the clothes identity in the generated image, that is, the key characteristics (e.g. texture, logo, embroidery) that depict the original clothes. However, previous image-conditioned generation works fail to meet these critical requirements towards the plausible virtual try-on performance since they fail to handle large spatial misalignment between the input image and target clothes. Prior work explicitly tackled spatial deformation using shape context matching, but failed to preserve clothing details due to its coarse-to-fine strategy. In this work, we propose a new fully-learnable Characteristic-Preserving Virtual Try-On Network (CP-VTON) for addressing all real-world challenges in this task. First, CP-VTON learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a new Geometric Matching Module (GMM) rather than computing correspondences of interest points as prior works did. Second, to alleviate boundary artifacts of warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes and the rendered image to ensure smoothness. Extensive experiments on a fashion dataset demonstrate our CP-VTON achieves the state-of-the-art virtual try-on performance both qualitatively and quantitatively.",
"title": ""
},
{
"docid": "00828ab21f8bb19a5621d6964636425e",
"text": "Deep neural networks (DNN) have achieved huge practical suc cess in recent years. However, its theoretical properties (in particular genera lization ability) are not yet very clear, since existing error bounds for neural networks cannot be directly used to explain the statistical behaviors of practically adopte d DNN models (which are multi-class in their nature and may contain convolutional l ayers). To tackle the challenge, we derive a new margin bound for DNN in this paper, in which the expected0-1 error of a DNN model is upper bounded by its empirical margin e rror plus a Rademacher Average based capacity term. This new boun d is very general and is consistent with the empirical behaviors of DNN models ob erved in our experiments. According to the new bound, minimizing the emp irical margin error can effectively improve the test performance of DNN. We ther efore propose large margin DNN algorithms, which impose margin penalty terms to the cross entropy loss of DNN, so as to reduce the margin error during the traini ng process. Experimental results show that the proposed algorithms can achiev e s gnificantly smaller empirical margin errors, as well as better test performance s than the standard DNN algorithm.",
"title": ""
},
{
"docid": "d0e977ab137cd004420bda28bd0b11be",
"text": "This study investigates the roles of cohesion and coherence in evaluations of essay quality. Cohesion generally has a facilitative effect on text comprehension and is assumed to be related to essay coherence. By contrast, recent studies of essay writing have demonstrated that computational indices of cohesion are not predictive of evaluations of writing quality. This study investigates expert ratings of individual text features, including coherence, in order to examine their relation to evaluations of holistic essay quality. The results suggest that coherence is an important attribute of overall essay quality, but that expert raters evaluate coherence based on the absence of cohesive cues in the essays rather than their presence. This finding has important implications for text understanding and the role of coherence in writing quality.",
"title": ""
},
{
"docid": "bb8e11200fe68989783f1f04367f3ebb",
"text": "Amyotrophic lateral sclerosis (ALS) is a common fatal motor neuron disease that assails the nerve cells in the brain. As the nervous system controls the muscle activity, the electromyography (EMG) signals can be viewed and examined in order to detect the vital features of the ALS disease in individuals. In this paper, the discrete wavelet transform (DWT) based features, which are extracted from a frame of EMG data, are introduced to classify the normal person and the ALS patients. From each frame of EMG data, instead of using a large number of DWT coefficients, the DWT coefficients with higher values as well as their mean and maxima are proposed to be used, which drastically reduces the feature dimension. It is shown that the proposed feature vector offers a high within class compactness and between class separations. For the purpose of classification, the K-nearest neighborhood classifier is employed. In order to demonstrate the classification performance, an EMG database consisted of 5 normal subjects and 5 ALS patients is considered and it is found that the proposed method is capable of distinctly separating the ALS patients from the normal persons.",
"title": ""
},
{
"docid": "a271371ba28be10b67e31ecca6f3aa88",
"text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.",
"title": ""
},
{
"docid": "89c3f876494506aceeb9b9ccf0da0ff1",
"text": "With the prevalence of accessible depth sensors, dynamic human body skeletons have attracted much attention as a robust modality for action recognition. Previous methods model skeletons based on RNN or CNN, which has limited expressive power for irregular joints. In this paper, we represent skeletons naturally on graphs and propose a generalized graph convolutional neural networks (GGCN) for skeleton-based action recognition, aiming to capture space-time variation via spectral graph theory. In particular, we construct a generalized graph over consecutive frames, where each joint is not only connected to its neighboring joints in the same frame strongly or weakly, but also linked with relevant joints in the previous and subsequent frames. The generalized graphs are then fed into GGCN along with the coordinate matrix of the skeleton sequence for feature learning, where we deploy high-order and fast Chebyshev approximation of spectral graph convolution in the network. Experiments show that we achieve the state-of-the-art performance on the widely used NTU RGB+D, UT-Kinect and SYSU 3D datasets.",
"title": ""
},
{
"docid": "df89dc3a36ac18fd880a7249022b4b2c",
"text": "ConvNets and Imagenet have driven the recent success of deep learning for image classification. However, the marked slowdown in performance improvement, the recent studies on the lack of robustness of neural networks to adversarial examples and their tendency to exhibit undesirable biases questioned the reliability and sustained development of these methods. This work investigates these questions from the perspective of the end-user by using human subject studies and explanations. We experimentally demonstrate that the accuracy and robustness of ConvNets measured on Imagenet are underestimated. We show that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user and we introduce a novel tool for uncovering the undesirable biases learned by a model. These contributions also show that explanations are a promising tool for improving our understanding of ConvNets’ predictions and for designing more reliable models1.",
"title": ""
},
{
"docid": "b610e9bef08ef2c133a02e887b89b196",
"text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.",
"title": ""
},
{
"docid": "921c7a6c3902434b250548e573816978",
"text": "Energy harvesting based on tethered kites makes use of the advantage, that these airborne wind energy systems are able to exploit higher wind speeds at higher altitudes. The setup, considered in this paper, is based on the pumping cycle, which generates energy by winching out at high tether forces, driving an electrical generator while flying crosswind and winching in at a stationary neutral position, thus leaving a net amount of generated energy. The economic operation of such airborne wind energy plants demands for a reliable control system allowing for a complete autonomous operation of cycles. This task involves the flight control of the kite as well as the operation of a winch for the tether. The focus of this paper is put on the flight control, which implements an accurate direction control towards target points allowing for eight-down pattern flights. In addition, efficient winch control strategies are provided. The paper summarises a simple comprehensible model with equations of motion in order to motivate the approach of the control system design. After an extended overview on the control system, the flight controller parts are discussed in detail. Subsequently, the winch strategies based on an optimisation scheme are presented. In order to demonstrate the real world functionality of the presented algorithms, flight data from a fully automated pumping-cycle operation of a small-scale prototype setup based on a 30 m2 kite and a 50 kW electrical motor/generator is given.",
"title": ""
},
{
"docid": "dee37431ec24aae3fd8c9e43a4f9f93e",
"text": "We present a new feature representation method for scene text recognition problem, particularly focusing on improving scene character recognition. Many existing methods rely on Histogram of Oriented Gradient (HOG) or part-based models, which do not span the feature space well for characters in natural scene images, especially given large variation in fonts with cluttered backgrounds. In this work, we propose a discriminative feature pooling method that automatically learns the most informative sub-regions of each scene character within a multi-class classification framework, whereas each sub-region seamlessly integrates a set of low-level image features through integral images. The proposed feature representation is compact, computationally efficient, and able to effectively model distinctive spatial structures of each individual character class. Extensive experiments conducted on challenging datasets (Chars74K, ICDAR'03, ICDAR'11, SVT) show that our method significantly outperforms existing methods on scene character classification and scene text recognition tasks.",
"title": ""
},
{
"docid": "f309d2f237f4451bea75767f53277143",
"text": "Most problems in computational geometry are algebraic. A general approach to address nonrobustness in such problems is Exact Geometric Computation (EGC). There are now general libraries that support EGC for the general programmer (e.g., Core Library, LEDA Real). Many applications require non-algebraic functions as well. In this paper, we describe how to provide non-algebraic functions in the context of other EGC capabilities. We implemented a multiprecision hypergeometric series package which can be used to evaluate common elementary math functions to an arbitrary precision. This can be achieved relatively easily using the Core Library which supports a guaranteed precision level of accuracy. We address several issues of efficiency in such a hypergeometric package: automatic error analysis, argument reduction, preprocessing of hypergeometric parameters, and precomputed constants. Some preliminary experimental results are reported.",
"title": ""
},
{
"docid": "e4e60c0ea93a2297636c265c00277bb1",
"text": "Event studies, which look at stock market reactions to assess corporate business events, represent a relatively new research approach in the information systems field. In this paper we present a systematic review of thirty event studies related to information technology. After a brief discussion of each of the papers included in our review, we call attention to several limitations of the published studies and propose possible future research avenues.",
"title": ""
},
{
"docid": "6a922e97c878c4d1769e1101f5026cf9",
"text": "Human activities create waste, and it is the way these wastes are handled, stored, collected and disposed of, which can pose risks to the environment and to public health. Where intense human activities concentrate, such as in urban centres, appropriate and safe solid waste management (SWM) are of utmost importance to allow healthy living conditions for the population. This fact has been acknowledged by most governments, however many municipalities are struggling to provide even the most basic services. Typically one to two thirds of the solid waste generated is not collected (World Resources Institute, et al., 1996). As a result, the uncollected waste, which is often also mixed with human and animal excreta, is dumped indiscriminately in the streets and in drains, so contributing to flooding, breeding of insect and rodent vectors and the spread of diseases (UNEP-IETC, 1996). Most of the municipal solid waste in low-income Asian countries which is collected is dumped on land in a more or less uncontrolled manner. Such inadequate waste disposal creates serious environmental problems that affect health of humans and animals and cause serious economic and other welfare losses. The environmental degradation caused by inadequate disposal of waste can be expressed by the contamination of surface and ground water through leachate, soil contamination through direct waste contact or leachate, air pollution by burning of wastes, spreading of diseases by different vectors like birds, insects and rodents, or uncontrolled release of methan by anaerobic decomposition of waste",
"title": ""
},
{
"docid": "33b129cb569c979c81c0cb1c0a5b9594",
"text": "During animal development, accurate control of tissue specification and growth are critical to generate organisms of reproducible shape and size. The eye-antennal disc epithelium of Drosophila is a powerful model system to identify the signaling pathway and transcription factors that mediate and coordinate these processes. We show here that the Yorkie (Yki) pathway plays a major role in tissue specification within the developing fly eye disc epithelium at a time when organ primordia and regional identity domains are specified. RNAi-mediated inactivation of Yki, or its partner Scalloped (Sd), or increased activity of the upstream negative regulators of Yki cause a dramatic reorganization of the eye disc fate map leading to specification of the entire disc epithelium into retina. On the contrary, constitutive expression of Yki suppresses eye formation in a Sd-dependent fashion. We also show that knockdown of the transcription factor Homothorax (Hth), known to partner Yki in some developmental contexts, also induces an ectopic retina domain, that Yki and Scalloped regulate Hth expression, and that the gain-of-function activity of Yki is partially dependent on Hth. Our results support a critical role for Yki- and its partners Sd and Hth--in shaping the fate map of the eye epithelium independently of its universal role as a regulator of proliferation and survival.",
"title": ""
}
] |
scidocsrr
|
0d36005ae1f42777e8beebfcb68133c6
|
Modeling Radiometric Uncertainty for Vision with Tone-Mapped Color Images
|
[
{
"docid": "aa74720aa2d191b9eb25104ee3a33b1e",
"text": "We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function, the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.",
"title": ""
}
] |
[
{
"docid": "9a48ed0b691123ce3f08f673dc90fea7",
"text": "IoT is an expanding network of physical devices that are linked with different types of sensors and with the help of connectivity to the internet, they are able to exchange data. Through IoT, internet has now extended its roots to almost every possible thing present around us and is no more limited to our personal computers and mobile phones. Safety, the elementary concern of any project, has not been left untouched by IoT. Gas Leakages in open or closed areas can prove to be dangerous and lethal. The traditional Gas Leakage Detector Systems though have great precision, fail to acknowledge a few factors in the field of alerting the people about the leakage. Therefore we have used the IoT technology to make a Gas Leakage Detector having Smart Alerting techniques involving calling, sending text message and an e-mail to the concerned authority and an ability to predict hazardous situation so that people could be made aware in advance by performing data analytics on sensor readings.",
"title": ""
},
{
"docid": "e444dcc97882005658aca256991e816e",
"text": "The terms superordinate, hyponym, and subordinate designate the hierarchical taxonomic relationship of words. They also represent categories and concepts. This relationship is a subject of interest for anthropology, cognitive psychology, psycholinguistics, linguistic semantics, and cognitive linguistics. Taxonomic hierarchies are essentially classificatory systems, and they are supposed to reflect the way that speakers of a language categorize the world of experience. A well-formed taxonomy offers an orderly and efficient set of categories at different levels of specificity (Cruse 2000:180). However, the terms and levels of taxonomic hierarchy used in each discipline vary. This makes it difficult to carry out cross-disciplinary readings on the hierarchical taxonomy of words or categories, which act as an interface in these cognitive-based cross-disciplinary ventures. Not only words— terms and concepts differ but often the nature of the problem is compounded as some terms refer to differing word classes, categories and concepts at the same time. Moreover, the lexical relationship of terms among these lexical hierarchies is far from clear. As a result two lines of thinking can be drawn from the literature: (1) technical terms coined for the hierarchical relationship of words are conflicting and do not reflect reality or environment, and (2) the relationship among these hierarchies of word levels and the underlying principles followed to explain them are uncertain except that of inclusion.",
"title": ""
},
{
"docid": "cedfccb3fd6433e695082594cf0beb45",
"text": "Among the different existing cryptographic file systems, EncFS has a unique feature that makes it attractive for backup setups involving untrusted (cloud) storage. It is a file-based overlay file system in normal operation (i.e., it maintains a directory hierarchy by storing encrypted representations of files and folders in a specific source folder), but its reverse mode allows to reverse this process: Users can mount deterministic, encrypted views of their local, unencrypted files on the fly, allowing synchronization to untrusted storage using standard tools like rsync without having to store encrypted representations on the local hard drive. So far, EncFS is a single-user solution: All files of a folder are encrypted using the same, static key; file access rights are passed through to the encrypted representation, but not otherwise considered. In this paper, we work out how multi-user support can be integrated into EncFS and its reverse mode in particular. We present an extension that a) stores individual files' owner/group information and permissions in a confidential and authenticated manner, and b) cryptographically enforces thereby specified read rights. For this, we introduce user-specific keys and an appropriate, automatic key management. Given a user's key and a complete encrypted source directory, the extension allows access to exactly those files the user is authorized for according to the corresponding owner/group/permissions information. Just like EncFS, our extension depends only on symmetric cryptographic primitives.",
"title": ""
},
{
"docid": "9b013f0574cc8fd4139a94aa5cf84613",
"text": "Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for rewarddesign) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD’s performance. The new method improves UCT’s performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.",
"title": ""
},
{
"docid": "64bcd606e039f731aec7cc4722a4d3cb",
"text": "Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partialinformation setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.",
"title": ""
},
{
"docid": "bbfe7693d45e3343b30fad7f6c9279d8",
"text": "Vernier permanent magnet (VPM) machines can be utilized for direct drive applications by virtue of their high torque density and high efficiency. The purpose of this paper is to develop a general design guideline for split-slot low-speed VPM machines, generalize the operation principle, and illustrate the relationship among the numbers of the stator slots, coil poles, permanent magnet (PM) pole pairs, thereby laying a solid foundation for the design of various kinds of VPM machines. Depending on the PM locations, three newly designed VPM machines are reported in this paper and they are referred to as 1) rotor-PM Vernier machine, 2) stator-tooth-PM Vernier machine, and 3) stator-yoke-PM Vernier machine. The back-electromotive force (back-EMF) waveforms, static torque, and air-gap field distribution are predicted using time-stepping finite element method (TS-FEM). The performances of the proposed VPM machines are compared and reported.",
"title": ""
},
{
"docid": "3dec063e5ee45d31b419c60bad6ce77c",
"text": "Mondor's disease (MD) is a rare condition, which is considered a thrombophlebitis of the subcutaneous veins. It commonly occurs on the anterolateral thoracoabdominal wall, but it can also occur on the penis, groin, antecubital fossa and posterior cervical region. The clinical features are a sudden and typically asymptomatic onset of a cord-like induration, although some patients report a feeling of 'strain'. It is a self-limiting process that lasts a short period of time, which may be the reason why there are few reports about its diagnosis and treatment. Its pathogenesis has remained unclear, because of the lack of methods to reliably differentiate between veins and lymphatic vessels. Immunohistochemical staining for CD31 and D240 has been identified recently as the best method to distinguish small veins from lymphatic vessels, making it a valuable technique in diagnosing not only MD, but also many other diseases in which veins or lymphatic vessels are affected. MD has been associated with several systemic diseases such as breast cancer and hypercoagulability states, thus laboratory studies are recommended to exclude any possible systemic disorders. As this condition is usually a benign and self-limiting process, vigorous treatment is only recommended when the process is symptomatic or recurrent.",
"title": ""
},
{
"docid": "980a9d76136ffa057865d2bb425dc8e7",
"text": "Research in digital watermarking is mature. Several software implementations of watermarking algorithms are described in the literature, but few attempts have been made to describe hardware implementations. The ultimate objective of the research presented in this paper was to develop low-power, highperformance, real-time, reliable and secure watermarking systems, which can be achieved through hardware implementations. In this paper, we discuss the development of a very-large-scale integration architecture for a high-performance watermarking chip that can perform both invisible robust and invisible fragile image watermarking in the spatial domain. We prototyped the watermarking chip in two ways: (i) by using a Xilinx field-programmable gate array and (ii) by building a custom integrated circuit. To the best of our knowledge, this prototype is the first watermarking chip with both invisible robust and invisible fragile watermarking capabilities.",
"title": ""
},
{
"docid": "b7222f86da6f1e44bd1dca88eb59dc4b",
"text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.",
"title": ""
},
{
"docid": "581c4d11e59dc17e0cb6ecf5fa7bea93",
"text": "This paper describes the three methodologies used by CALCE in their winning entry for the IEEE 2012 PHM Data Challenge competition. An experimental data set from seventeen ball bearings was provided by the FEMTO-ST Institute. The data set consisted of data from six bearings for algorithm training and data from eleven bearings for testing. The authors developed prognostic algorithms based on the data from the training bearings to estimate the remaining useful life of the test bearings. Three methodologies are presented in this paper. Result accuracies of the winning methodology are presented.",
"title": ""
},
{
"docid": "f414db165723f75a4991035d4dd2055d",
"text": "In data centers, caches work both to provide low IO latencies and to reduce the load on the back-end network and storage. But they are not designed for multi-tenancy; system-level caches today cannot be configured to match tenant or provider objectives. Exacerbating the problem is the increasing number of un-coordinated caches on the IO data plane. The lack of global visibility on the control plane to coordinate this distributed set of caches leads to inefficiencies, increasing cloud provider cost.\n We present Moirai, a tenant- and workload-aware system that allows data center providers to control their distributed caching infrastructure. Moirai can help ease the management of the cache infrastructure and achieve various objectives, such as improving overall resource utilization or providing tenant isolation and QoS guarantees, as we show through several use cases. A key benefit of Moirai is that it is transparent to applications or VMs deployed in data centers. Our prototype runs unmodified OSes and databases, providing immediate benefit to existing applications.",
"title": ""
},
{
"docid": "e96f455aa2c82d358eb94c72d93c8b03",
"text": "OBJECTIVE\nTo evaluate the effects of mirror therapy on upper-extremity motor recovery, spasticity, and hand-related functioning of inpatients with subacute stroke.\n\n\nDESIGN\nRandomized, controlled, assessor-blinded, 4-week trial, with follow-up at 6 months.\n\n\nSETTING\nRehabilitation education and research hospital.\n\n\nPARTICIPANTS\nA total of 40 inpatients with stroke (mean age, 63.2y), all within 12 months poststroke.\n\n\nINTERVENTIONS\nThirty minutes of mirror therapy program a day consisting of wrist and finger flexion and extension movements or sham therapy in addition to conventional stroke rehabilitation program, 5 days a week, 2 to 5 hours a day, for 4 weeks.\n\n\nMAIN OUTCOME MEASURES\nThe Brunnstrom stages of motor recovery, spasticity assessed by the Modified Ashworth Scale (MAS), and hand-related functioning (self-care items of the FIM instrument).\n\n\nRESULTS\nThe scores of the Brunnstrom stages for the hand and upper extremity and the FIM self-care score improved more in the mirror group than in the control group after 4 weeks of treatment (by 0.83, 0.89, and 4.10, respectively; all P<.01) and at the 6-month follow-up (by 0.16, 0.43, and 2.34, respectively; all P<.05). No significant differences were found between the groups for the MAS.\n\n\nCONCLUSIONS\nIn our group of subacute stroke patients, hand functioning improved more after mirror therapy in addition to a conventional rehabilitation program compared with a control treatment immediately after 4 weeks of treatment and at the 6-month follow-up, whereas mirror therapy did not affect spasticity.",
"title": ""
},
{
"docid": "3b23ed2330401a53f45ea5056e7d4be3",
"text": "coherent exposition difficult. Almost every war, however, has notable instances where some cipher message solution foils a plot or wins a battle. Here it is easy to connect the cryptographic or cryptanalytic technicalities with particular historical events, but a book that relies too much on such instances becomes in effect no more than an adventure story anthology. So it is no surprise that there are few general surveys of the history of cryptography and fewer good ones. The rule of thumb seems to be one new book every thirty years. In 1902 and 1906 Alois Meister published his immensely scholarly Die Anfänge der Modernen Diplomatischen Geheimschrift and Die Geheimschrift im Dienste der Päpstlichen Kurie, reproducing and summarizing texts relevant to cryptography in the late medieval and early modern periods. The readership cannot have been large. At the opposite extreme of readability was the 1939 Secret and Urgent: The Story of Codes and Ciphers by the journalist and naval affairs commentator Fletcher Pratt. The book presented a breezy series of thrilling anecdotal historical episodes involving ciphers and code-breaking exploits. Each episode came complete with Sunday supplement-style character sketches and just the",
"title": ""
},
{
"docid": "2a40501256bdaa11ab9b4c0c9f04d45b",
"text": "In recent years, deep learning has achieved great success in many computer vision applications. Convolutional neural networks (CNNs) have lately emerged as a major approach to image classification. Most research on CNNs thus far has focused on developing architectures such as the Inception and residual networks. The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself. In this paper, we introduce a convolution unit called the active convolution unit (ACU). A new convolution has no fixed shape, because of which we can define any form of convolution. Its shape can be learned through backpropagation during training. Our proposed unit has a few advantages. First, the ACU is a generalization of convolution, it can define not only all conventional convolutions, but also convolutions with fractional pixel coordinates. We can freely change the shape of the convolution, which provides greater freedom to form CNN structures. Second, the shape of the convolution is learned while training and there is no need to tune it by hand. Third, the ACU can learn better than a conventional unit, where we obtained the improvement simply by changing the conventional convolution to an ACU. We tested our proposed method on plain and residual networks, and the results showed significant improvement using our method on various datasets and architectures in comparison with the baseline. Code is available at https://github.com/jyh2986/Active-Convolution.",
"title": ""
},
{
"docid": "63c3e74f2d26dde9a0cdbd7161348197",
"text": "We assessed brain activation of nine normal right-handed volunteers in a positron emission tomography study designed to differentiate the functional anatomy of the two major components of auditory comprehension of language, namely phonological versus lexico-semantic processing. The activation paradigm included three tasks. In the reference task, subjects were asked to detect rising pitch within a series of pure tones. In the phonological task, they had to monitor the sequential phonemic organization of non-words. In the lexico-semantic task, they monitored concrete nouns according to semantic criteria. We found highly significant and different patterns of activation. Phonological processing was associated with activation in the left superior temporal gyrus (mainly Wernicke's area) and, to a lesser extent, in Broca's area and in the right superior temporal regions. Lexico-semantic processing was associated with activity in the left middle and inferior temporal gyri, the left inferior parietal region and the left superior prefrontal region, in addition to the superior temporal regions. A comparison of the pattern of activation obtained with the lexico-semantic task to that obtained with the phonological task was made in order to account for the contribution of lower stage components to semantic processing. No difference in activation was found in Broca's area and superior temporal areas which suggests that these areas are activated by the phonological component of both tasks, but activation was noted in the temporal, parietal and frontal multi-modal association areas. These constitute parts of a large network that represent the specific anatomic substrate of the lexico-semantic processing of language.",
"title": ""
},
{
"docid": "0cc02773fd194c42071f8500a0c88261",
"text": "Neuroscientific and psychological data suggest a close link between affordance and mirror systems in the brain. However, we still lack a full understanding of both the individual systems and their interactions. Here, we propose that the architecture and functioning of the two systems is best understood in terms of two challenges faced by complex organisms, namely: (a) the need to select among multiple affordances and possible actions dependent on context and high-level goals and (b) the exploitation of the advantages deriving from a hierarchical organisation of behaviour based on actions and action-goals. We first review and analyse the psychological and neuroscientific literature on the mechanisms and processes organisms use to deal with these challenges. We then analyse existing computational models thereof. Finally we present the design of a computational framework that integrates the reviewed knowledge. The framework can be used both as a theoretical guidance to interpret empirical data and design new experiments, and to design computational models addressing specific problems debated in the literature.",
"title": ""
},
{
"docid": "7569c7f3983c608151fb5bbb093b3293",
"text": "A unilateral probe-fed rectangular dielectric resonator antenna (DRA) with a very small ground plane is investigated. The small ground plane simultaneously works as an excitation patch that excites the fundamental TE111 mode of the DRA, which is an equivalent magnetic dipole. By combining this equivalent magnetic dipole and the electric dipole of the probe, a lateral radiation pattern can be obtained. This complementary antenna has the same E- and H-Planes patterns with low back radiation. Moreover, the cardioid-shaped pattern can be easily steered in the horizontal plane by changing the angular position of the patch (ground). To verify the idea, a prototype operating in 3.5-GHz long term evolution band (3.4–3.6 GHz) was fabricated and measured, with reasonable agreement between the measured and simulated results obtained. It is found that the measured 15-dB front-to-back-ratio bandwidth is 10.9%.",
"title": ""
},
{
"docid": "04fe2706a8da54365e4125867613748b",
"text": "We consider a sequence of multinomial data for which the probabilities associated with the categories are subject to abrupt changes of unknown magnitudes at unknown locations. When the number of categories is comparable to or even larger than the number of subjects allocated to these categories, conventional methods such as the classical Pearson’s chi-squared test and the deviance test may not work well. Motivated by high-dimensional homogeneity tests, we propose a novel change-point detection procedure that allows the number of categories to tend to infinity. The null distribution of our test statistic is asymptotically normal and the test performs well with finite samples. The number of change-points is determined by minimizing a penalized objective function based on segmentation, and the locations of the change-points are estimated by minimizing the objective function with the dynamic programming algorithm. Under some mild conditions, the consistency of the estimators of multiple change-points is established. Simulation studies show that the proposed method performs satisfactorily for identifying change-points in terms of power and estimation accuracy, and it is illustrated with an analysis of a real data set.",
"title": ""
},
{
"docid": "d23b1cdbf4e8984eb5ae373318d94431",
"text": "Search engines have greatly influenced the way people access information on the Internet, as such engines provide the preferred entry point to billions of pages on the Web. Therefore, highly ranked Web pages generally have higher visibility to people and pushing the ranking higher has become the top priority for Web masters. As a matter of fact, Search Engine Optimization (SEO) has became a sizeable business that attempts to improve their clients’ ranking. Still, the lack of ways to validate SEO’s methods has created numerous myths and fallacies associated with ranking algorithms.\n In this article, we focus on two ranking algorithms, Google’s and Bing’s, and design, implement, and evaluate a ranking system to systematically validate assumptions others have made about these popular ranking algorithms. We demonstrate that linear learning models, coupled with a recursive partitioning ranking scheme, are capable of predicting ranking results with high accuracy. As an example, we manage to correctly predict 7 out of the top 10 pages for 78% of evaluated keywords. Moreover, for content-only ranking, our system can correctly predict 9 or more pages out of the top 10 ones for 77% of search terms. We show how our ranking system can be used to reveal the relative importance of ranking features in a search engine’s ranking function, provide guidelines for SEOs and Web masters to optimize their Web pages, validate or disprove new ranking features, and evaluate search engine ranking results for possible ranking bias.",
"title": ""
},
{
"docid": "398f53713c90fb53ad28f31bbbec49df",
"text": "Gaussian process (GP) models are very popular for machine learning and regression and they are widely used to account for spatial or temporal relationships between multivariate random variables. In this paper, we propose a general formulation of underdetermined source separation as a problem involving GP regression. The advantage of the proposed unified view is first to describe the different underdetermined source separation problems as particular cases of a more general framework. Second, it provides a flexible means to include a variety of prior information concerning the sources such as smoothness, local stationarity or periodicity through the use of adequate covariance functions. Third, given the model, it provides an optimal solution in the minimum mean squared error (MMSE) sense to the source separation problem. In order to make the GP models tractable for very large signals, we introduce framing as a GP approximation and we show that computations for regularly sampled and locally stationary GPs can be done very efficiently in the frequency domain. These findings establish a deep connection between GP and nonnegative tensor factorizations (NTF) with the Itakura-Saito distance and lead to effective methods to learn GP hyperparameters for very large and regularly sampled signals.",
"title": ""
}
] |
scidocsrr
|
6cffefb378f6439dba7c1228059ef497
|
A Comparison of Sequence-to-Sequence Models for Speech Recognition
|
[
{
"docid": "afee419227629f8044b5eb0addd65ce3",
"text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"title": ""
},
{
"docid": "e77c7b9c486f895167c54b6724e9e3c8",
"text": "Many machine learning tasks can be expressed as the transformation—or transduction—of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation since finding the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that returns a distribution over output sequences of all possible lengths and alignments for any input sequence. Experimental results are provided on the TIMIT speech corpus.",
"title": ""
},
{
"docid": "e73060d189e9a4f4fd7b93e1cab22955",
"text": "We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques that further improve performance of LSTM RNN acoustic models for large vocabulary speech recognition. We show that frame stacking and reduced frame rate lead to more accurate models and faster decoding. CD phone modeling leads to further improvements. We also present initial results for LSTM RNN models outputting words directly.",
"title": ""
}
] |
[
{
"docid": "a5e23ca50545378ef32ed866b97fd418",
"text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.",
"title": ""
},
{
"docid": "477e5be6b2727a5d6f0a976c4c64c960",
"text": "Glaucoma is the second leading cause of blindness all over the world, with approximately 60 million cases reported worldwide in 2010. If undiagnosed in time, glaucoma causes irreversible damage to the optic nerve leading to blindness. The optic nerve head examination, which involves measurement of cup-todisc ratio, is considered one of the most valuable methods of structural diagnosis of the disease. Estimation of cup-to-disc ratio requires segmentation of optic disc and optic cup on eye fundus images and can be performed by modern computer vision algorithms. This work presents universal approach for automatic optic disc and cup segmentation, which is based on deep learning, namely, modification of U-Net convolutional neural network. Our experiments include comparison with the best known methods on publicly available databases DRIONS-DB, RIM-ONE v.3, DRISHTI-GS. For both optic disc and cup segmentation, our method achieves quality comparable to current state-of-the-art methods, outperforming them in terms of the prediction time.",
"title": ""
},
{
"docid": "f35a1201362e22bae2ff377da9f2c122",
"text": "We examined the impact of repeated testing and repeated studying on long-term learning. In Experiment 1, we replicated Karpicke and Roediger's (2008) influential results showing that once information can be recalled, repeated testing on that information enhances learning, whereas restudying that information does not. We then examined whether the apparent ineffectiveness of restudying might be attributable to the spacing differences between items that were inherent in the between-subjects design employed by Karpicke and Roediger. When we controlled for these spacing differences by manipulating the various learning conditions within subjects in Experiment 2, we found that both repeated testing and restudying improved learning, and that learners' awareness of the relative mnemonic benefits of these strategies was enhanced. These findings contribute to understanding how two important factors in learning-test-induced retrieval processes and spacing-can interact, and they illustrate that such interactions can play out differently in between-subjects and within-subjects experimental designs.",
"title": ""
},
{
"docid": "07f7a4fe69f6c4a1180cc3ca444a363a",
"text": "With the popularization of IoT (Internet of Things) devices and the continuous development of machine learning algorithms, learning-based IoT malicious traffic detection technologies have gradually matured. However, learning-based IoT traffic detection models are usually very vulnerable to adversarial samples. There is a great need for an automated testing framework to help security analysts to detect errors in learning-based IoT traffic detection systems. At present, most methods for generating adversarial samples require training parameters of known models and are only applicable to image data. To address the challenge, we propose a testing framework for learning-based IoT traffic detection systems, TLTD. By introducing genetic algorithms and some technical improvements, TLTD can generate adversarial samples for IoT traffic detection systems and can perform a black-box test on the systems.",
"title": ""
},
{
"docid": "12819e1ad6ca9b546e39ed286fe54d23",
"text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.",
"title": ""
},
{
"docid": "58d64f5c8c9d953b3c2df0a029eab864",
"text": "We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of cross-entropy minimization. This form of training, which accounts for model predictions at training time rather than assuming an error-free action history, improves parsing accuracies for both English and Chinese, obtaining very strong results for both languages. We discuss some modifications needed in order to get training with exploration to work well for a probabilistic neural-network dependency parser.",
"title": ""
},
{
"docid": "9152c55c35305bcaf56bc586e87f1575",
"text": "Information practices that use personal, financial, and health-related information are governed by US laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must properly be aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These \"rules\" are often precursors to software requirements that must undergo considerable refinement and analysis before they become implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology for directly extracting access rights and obligations from regulation texts. The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.",
"title": ""
},
{
"docid": "ca5f251364ddf21e4cecf25cda5b575d",
"text": "This paper discusses \"bioink\", bioprintable materials used in three dimensional (3D) bioprinting processes, where cells and other biologics are deposited in a spatially controlled pattern to fabricate living tissues and organs. It presents the first comprehensive review of existing bioink types including hydrogels, cell aggregates, microcarriers and decellularized matrix components used in extrusion-, droplet- and laser-based bioprinting processes. A detailed comparison of these bioink materials is conducted in terms of supporting bioprinting modalities and bioprintability, cell viability and proliferation, biomimicry, resolution, affordability, scalability, practicality, mechanical and structural integrity, bioprinting and post-bioprinting maturation times, tissue fusion and formation post-implantation, degradation characteristics, commercial availability, immune-compatibility, and application areas. The paper then discusses current limitations of bioink materials and presents the future prospects to the reader.",
"title": ""
},
{
"docid": "dd45f296e623857262bd65e5d3843f33",
"text": "In their original versions, nature-inspired search algorithms such as evolutionary algorithms and those based on swarm intelligence, lack a mechanism to deal with the constraints of a numerical optimization problem. Nowadays, however, there exists a considerable amount of research devoted to design techniques for handling constraints within a nature-inspired algorithm. This paper presents an analysis of the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms. From them, the most popular approaches are analyzed in more detail. For each of them, some representative instantiations are further discussed. In the last part of the paper, some of the future trends in the area, which have been only scarcely explored, are briefly discussed and then the conclusions of this paper are presented. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5e4660c0f9e5144a496de13b0f7c35b3",
"text": "Deep learning techniques have achieved success in aspect-based sentiment analysis in recent years. However, there are two important issues that still remain to be further studied, i.e., 1) how to efficiently represent the target especially when the target contains multiple words; 2) how to utilize the interaction between target and left/right contexts to capture the most important words in them. In this paper, we propose an approach, called left-centerright separated neural network with rotatory attention (LCR-Rot), to better address the two problems. Our approach has two characteristics: 1) it has three separated LSTMs, i.e., left, center and right LSTMs, corresponding to three parts of a review (left context, target phrase and right context); 2) it has a rotatory attention mechanism which models the relation between target and left/right contexts. The target2context attention is used to capture the most indicative sentiment words in left/right contexts. Subsequently, the context2target attention is used to capture the most important word in the target. This leads to a two-side representation of the target: left-aware target and right-aware target. We compare our approach on three benchmark datasets with ten related methods proposed recently. The results show that our approach significantly outperforms the state-of-the-art techniques.",
"title": ""
},
{
"docid": "9f04f8b2adc1c3afe23f8c2202528734",
"text": "Fluorodeoxyglucose positron emission tomography (FDG-PET) imaging based 3D topographic brain glucose metabolism patterns from normal controls (NC) and individuals with dementia of Alzheimer's type (DAT) are used to train a novel multi-scale ensemble classification model. This ensemble model outputs a FDG-PET DAT score (FPDS) between 0 and 1 denoting the probability of a subject to be clinically diagnosed with DAT based on their metabolism profile. A novel 7 group image stratification scheme is devised that groups images not only based on their associated clinical diagnosis but also on past and future trajectories of the clinical diagnoses, yielding a more continuous representation of the different stages of DAT spectrum that mimics a real-world clinical setting. The potential for using FPDS as a DAT biomarker was validated on a large number of FDG-PET images (N=2984) obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database taken across the proposed stratification, and a good classification AUC (area under the curve) of 0.78 was achieved in distinguishing between images belonging to subjects on a DAT trajectory and those images taken from subjects not progressing to a DAT diagnosis. Further, the FPDS biomarker achieved state-of-the-art performance on the mild cognitive impairment (MCI) to DAT conversion prediction task with an AUC of 0.81, 0.80, 0.77 for the 2, 3, 5 years to conversion windows respectively.",
"title": ""
},
{
"docid": "228a777c356591c4d1944e645c04a106",
"text": "Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.",
"title": ""
},
{
"docid": "bda2c57a02275e0533f83da1ad46b573",
"text": "In this thesis, we propose a new, scalable probabilistic logic called ProPPR to combine the best of the symbolic and statistical worlds. ProPPR has the rich semantic representation of Prolog, but we associate a feature vector to each clause, such that each clause has a weight vector that can be learned from the training data. Instead of searching over the entire graph for solutions, ProPPR uses a provably-correct approximate personalized PageRank to construct a subgraph for local grounding: the inference time is now independent of the size of the KB. We show that ProPPR can be viewed as a recursive extension to the path ranking algorithm (PRA), and outperforms PRA in the inference task with one million facts from NELL.",
"title": ""
},
{
"docid": "08d59866cf8496573707d46a6cb520d4",
"text": "Healthcare is an integral component in people's lives, especially for the rising elderly population. Medicare is one such healthcare program that provides for the needs of the elderly. It is imperative that these healthcare programs are affordable, but this is not always the case. Out of the many possible factors for the rising cost of healthcare, claims fraud is a major contributor, but its impact can be lessened through effective fraud detection. We propose a general outlier detection model, based on Bayesian inference, using probabilistic programming. Our model provides probability distributions rather than just point values, as with most common outlier detection methods. Credible intervals are also generated to further enhance confidence that the detected outliers should in fact be considered outliers. Two case studies are presented demonstrating our model's effectiveness in detecting outliers. The first case study uses temperature data in order to provide a clear comparison of several outlier detection techniques. The second case study uses a Medicare dataset to showcase our proposed outlier detection model. Our results show that the successful detection of outliers, which indicate possible fraudulent activities, can provide effective and meaningful results for further investigation within medical specialties or by using real-world, medical provider fraud investigation cases.",
"title": ""
},
{
"docid": "14b06c786127363d5bdaee4602b15a42",
"text": "Instant messaging applications continue to grow in popularity as a means of communicating and sharing multimedia files. The information contained within these applications can prove invaluable to law enforcement in the investigation of crimes. Kik messenger is a recently introduced instant messaging application that has become very popular in a short period of time, especially among young users. The novelty of Kik means that there has been little forensic examination conducted on this application. This study addresses this issue by investigating Kik messenger on Apple iOS devices. The goal was to locate and document artefacts created or modified by Kik messenger on devices installed with the latest version of iOS, as well as in iTunes backup files. Once achieved, the secondary goal was to analyse the artefacts to decode and interpret their meaning and by doing so, be able to answer the typical questions faced by forensic investigators. A detailed description of artefacts created or modified by Kik messenger is provided. Results from experiments showed that deleted images are not only recoverable from the device, but can also be located and downloaded from Kik servers. A process to link data from multiple database tables producing accurate chat histories is explained. These outcomes can be used by law enforcement to investigate crimes and by software developers to create tools to recover evidence. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d6959f0cd5ad7a534e99e3df5fa86135",
"text": "In the course of the project Virtual Try-On new VR technologies have been developed, which form the basis for a realistic, three dimensional, (real-time) simulation and visualization of individualized garments put on by virtual counterparts of real customers. To provide this cloning and dressing of people in VR, a complete process chain is being build up starting with the touchless 3-dimensional scanning of the human body up to a photo-realistic 3-dimensional presentation of the virtual customer dressed in the chosen pieces of clothing. The emerging platform for interactive selection and configuration of virtual garments, the „virtual shop“, will be accessible in real fashion boutiques as well as over the internet, thereby supplementing the conventional distribution channels.",
"title": ""
},
{
"docid": "56a8e1384f363adbf116bbb09b01f6f6",
"text": "IMPORTANCE\nMany valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction.\n\n\nOBJECTIVES\nTo present clinical characteristics and treatment outcome of saddle nose deformity and to propose a modified classification system to better characterize the variety of different saddle nose deformities.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThe retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification.\n\n\nMAIN OUTCOME AND MEASURE\nAesthetic outcomes were classified as excellent, good, fair, or poor.\n\n\nRESULTS\nPatients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilages were used in 40 patients (44%), and homologous costal cartilages were used in 5 patients (6%). According to postoperative assessment, 29 patients had excellent, 42 patients had good, 18 patients had fair, and 2 patients had poor aesthetic outcomes. No statistical difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty, owing to recurrence of saddle, wound infection, or warping of the costal cartilage for dorsal augmentation.\n\n\nCONCLUSIONS\nWe introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging.\n\n\nLEVEL OF EVIDENCE\n4.",
"title": ""
},
{
"docid": "0baf2c97da07f954a76b81f840ccca9e",
"text": "3 Chapter 1 Introduction 1.1 Background: Identification is an action of recognizing or being recognized, in particular, identification of a thing or person from previous exposures or information. Identification these days is quite necessary as for security purposes. It can be done using biometric parameters such as finger prints, I.D scan, face recognition etc. Most probably the first well known example of a facial recognition system is because of Kohonen, who signified that an uncomplicated neural network could execute face recognition for aligned and normalized face images. The sort of network he recruited was by computing a face illustration by estimating the eigenvectors of the face image's autocorrelation pattern; these eigenvectors are currently called as`Eigen faces. But Kohonen's approach was not a real time triumph due to the need for accurate alignment and normalization. In successive years a great number of researchers attempted facial recognition systems based on edges, inter-feature spaces, and various neural network techniques. While many were victorious using small scale databases of aligned samples, but no one significantly directed the alternative practical problem of vast databases where the position and scale of the face was not known. An image is supposed to be outcome of two real variables, defined in the \" real world \" , for example, a(x, y) where 'a' is the amplitude in terms of brightness of the image at the real coordinate position (x, y). It is now practicable to operate multi-dimensional signals with systems that vary from simple digital circuits to complicated circuits, due to modern technology. Image Analysis (input image->computation out) Image Understanding (input image-> high-level interpretation out) 4 In this age of science and technology, images also attain wider opportunity due to the rapidly increasing significance of scientific visualization, for example microarray data in genetic research. To process the image firstly it is transformed into a digital form. Digitization comprises of sampling of image and quantization of sampled values. After transformed into a digital form, processing is performed. It introduces focal attention on image, or improvement of image features such as boundaries, or variation that make a graphic display more effective for representation & study. This technique does not enlarge the intrinsic information content in data. This technique is used to remove the unwanted observed image to reduce the effect of mortifications. Scope and precision of the knowledge of mortifications process and filter design are the basis of …",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] |
scidocsrr
|
3a6a0642f3f8fb04085ef48542bf5173
|
Soldier-worn augmented reality system for tactical icon visualization
|
[
{
"docid": "071d7bc76ae1a23c82789d57f5647f40",
"text": "Applied Research Associates and BAE Systems are working together to develop a wearable augmented reality system under the DARPA ULTRA-Vis program † . Our approach to achieve the objectives of ULTRAVis, called iLeader, incorporates a full color 40° field of view (FOV) see-thru holographic waveguide integrated with sensors for full position and head tracking to provide an unobtrusive information system for operational maneuvers. iLeader will enable warfighters to mark-up the 3D battle-space with symbologic identification of graphical control measures, friendly force positions and enemy/target locations. Our augmented reality display provides dynamic real-time painting of symbols on real objects, a pose-sensitive 360 ̊ representation of relevant object positions, and visual feedback for a variety of system activities. The iLeader user interface and situational awareness graphical representations are highly intuitive, nondisruptive, and always tactically relevant. We used best human-factors practices, system engineering expertise, and cognitive task analysis to design effective strategies for presenting real-time situational awareness to the military user without distorting their natural senses and perception. We present requirements identified for presenting information within a see-through display in combat environments, challenges in designing suitable visualization capabilities, and solutions that enable us to bring real-time iconic command and control to the tactical user community.",
"title": ""
}
] |
[
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "d8042183e064ffba69b54246b17b9ff4",
"text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.",
"title": ""
},
{
"docid": "69a3676fad6416927cb59d818b999002",
"text": "Good medical leadership is vital in delivering high-quality healthcare, and yet medical career progression has traditionally seen leadership lack credence in comparison with technical and academic ability. Individual standards have varied, leading to variations in the quality of medical leadership between different organisations and, on occasions, catastrophic lapses in the standard of care provided to patients. These high-profile events, plus increasing evidence linking clinical leadership to performance of units, has led recently to more focus on leadership development for all doctors, starting earlier and continuing throughout their careers. There is also an increased drive to see doctors take on more significant leadership roles throughout the healthcare system. The achievement of these aims will require doctors to develop strong personal and professional values, a range of non-technical skills that allow them to lead across professional boundaries, and an understanding of the increasingly complex environment in which 21st century healthcare is delivered. Developing these attributes will require dedicated resources and the sophisticated application of a variety of different learning methodologies such as mentoring, coaching, action learning and networking.",
"title": ""
},
{
"docid": "c2a9e206917e004f55ce74a7cb15a8a1",
"text": "There has been a quantum leap in the performance of automated lip reading recently due to the application of neural network sequence models trained on a very large corpus of aligned text and face videos. However, this advance has only been demonstrated for frontal or near frontal faces, and so the question remains: can lips be read in profile to the same standard? The objective of this paper is to answer that question. We make three contributions: first, we obtain a new large aligned training corpus that contains profile faces, and select these using a face pose regressor network; second, we propose a curriculum learning procedure that is able to extend SyncNet [10] (a network to synchronize face movements and speech) progressively from frontal to profile faces; third, we demonstrate lip reading in profile for unseen videos. The trained model is evaluated on a held out test set, and is also shown to far surpass the state of the art on the OuluVS2 multi-view benchmark.",
"title": ""
},
{
"docid": "9ff76c8500a15d1c9b4a980b37bca505",
"text": "The thesis is about linear genetic programming (LGP), a machine learning approach that evolves computer programs as sequences of imperative instructions. Two fundamental differences to the more common tree-based variant (TGP) may be identified. These are the graph-based functional structure of linear genetic programs, on the one hand, and the existence of structurally noneffective code, on the other hand. The two major objectives of this work comprise (1) the development of more advanced methods and variation operators to produce better and more compact program solutions and (2) the analysis of general EA/GP phenomena in linear GP, including intron code, neutral variations, and code growth, among others. First, we introduce efficient algorithms for extracting features of the imperative and functional structure of linear genetic programs. In doing so, especially the detection and elimination of noneffective code during runtime will turn out as a powerful tool to accelerate the time-consuming step of fitness evaluation in GP. Variation operators are discussed systematically for the linear program representation. We will demonstrate that so called effective instruction mutations achieve the best performance in terms of solution quality. These mutations operate only on the (structurally) effective code and restrict the mutation step size to one instruction. One possibility to further improve their performance is to explicitly increase the probability of neutral variations. As a second, more time-efficient alternative we explicitly control the mutation step size on the effective code (effective step size). Minimum steps do not allow more than one effective instruction to change its effectiveness status. That is, only a single node may be connected to or disconnected from the effective graph component. It is an interesting phenomenon that, to some extent, the effective code becomes more robust against destructions over the generations already implicitly. A special concern of this thesis is to convince the reader that there are some serious arguments for using a linear representation. In a crossover-based comparison LGP has been found superior to TGP over a set of benchmark problems. Furthermore, linear solutions turned out to be more compact than tree solutions due to (1) multiple usage of subgraph results and (2) implicit parsimony pressure by structurally noneffective code. The phenomenon of code growth is analyzed for different linear genetic operators. When applying instruction mutations exclusively almost only neutral variations may be held responsible for the emergence and propagation of intron code. It is noteworthy that linear genetic programs may not grow if all neutral variation effects are rejected and if the variation step size is minimum. For the same reasons effective instruction mutations realize an implicit complexity control in linear GP which reduces a possible negative effect of code growth to a minimum. Another noteworthy result in this context is that program size is strongly increased by crossover while it is hardly influenced by mutation even if step sizes are not explicitly restricted.",
"title": ""
},
{
"docid": "2b5ade239beea52315e50e0d4fde197f",
"text": "The ultimate goal of research is to produce dependable knowledge or to provide the evidence that may guide practical decisions. Statistical conclusion validity (SCV) holds when the conclusions of a research study are founded on an adequate analysis of the data, generally meaning that adequate statistical methods are used whose small-sample behavior is accurate, besides being logically capable of providing an answer to the research question. Compared to the three other traditional aspects of research validity (external validity, internal validity, and construct validity), interest in SCV has recently grown on evidence that inadequate data analyses are sometimes carried out which yield conclusions that a proper analysis of the data would not have supported. This paper discusses evidence of three common threats to SCV that arise from widespread recommendations or practices in data analysis, namely, the use of repeated testing and optional stopping without control of Type-I error rates, the recommendation to check the assumptions of statistical tests, and the use of regression whenever a bivariate relation or the equivalence between two variables is studied. For each of these threats, examples are presented and alternative practices that safeguard SCV are discussed. Educational and editorial changes that may improve the SCV of published research are also discussed.",
"title": ""
},
{
"docid": "e075b1870628a92c3d96e6a7a05c7037",
"text": "The two major intracellular protein degradation systems, the ubiquitin-proteasome system (UPS) and autophagy, work collaboratively in many biological processes including development, apoptosis, aging, and countering oxidative injuries. We report here that, in human retinal pigment epithelial cells (RPE), ARPE-19 cells, proteasome inhibitors, clasto-lactacystinβ-lactone (LA) or epoxomicin (Epo), at non-lethal doses, increased the protein levels of autophagy-specific genes Atg5 and Atg7 and enhanced the conversion of microtubule-associated protein light chain (LC3) from LC3-I to its lipidative form, LC3-II, which was enhanced by co-addition of the saturated concentration of Bafilomycin A1 (Baf). Detection of co-localization for LC3 staining and labeled-lysosome further confirmed autophagic flux induced by LA or Epo. LA or Epo reduced the phosphorylation of the protein kinase B (Akt), a downstream target of phosphatidylinositol-3-kinases (PI3K), and mammalian target of rapamycin (mTOR) in ARPE-19 cells; by contrast, the induced changes of autophagy substrate, p62, showed biphasic pattern. The autophagy inhibitor, Baf, attenuated the reduction in oxidative injury conferred by treatment with low doses of LA and Epo in ARPE-19 cells exposed to menadione (VK3) or 4-hydroxynonenal (4-HNE). Knockdown of Atg7 with siRNA in ARPE-19 cells reduced the protective effects of LA or Epo against VK3. Overall, our results suggest that treatment with low levels of proteasome inhibitors confers resistance to oxidative injury by a pathway involving inhibition of the PI3K-Akt-mTOR pathway and activation of autophagy.",
"title": ""
},
{
"docid": "44ffac24ef4d30a8104a2603bb1cdcb1",
"text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.",
"title": ""
},
{
"docid": "5569fa921ab298e25a70d92489b273fc",
"text": "We present Centiman, a system for high performance, elastic transaction processing in the cloud. Centiman provides serializability on top of a key-value store with a lightweight protocol based on optimistic concurrency control (OCC).\n Centiman is designed for the cloud setting, with an architecture that is loosely coupled and avoids synchronization wherever possible. Centiman supports sharded transaction validation; validators can be added or removed on-the-fly in an elastic manner. Processors and validators scale independently of each other and recover from failure transparently to each other. Centiman's loosely coupled design creates some challenges: it can cause spurious aborts and it makes it difficult to implement common performance optimizations for read-only transactions. To deal with these issues, Centiman uses a watermark abstraction to asynchronously propagate information about transaction commits through the system.\n In an extensive evaluation we show that Centiman provides fast elastic scaling, low-overhead serializability for read-heavy workloads, and scales to millions of operations per second.",
"title": ""
},
{
"docid": "8405b35a36235ba26444655a3619812d",
"text": "Studying the reason why single-layer molybdenum disulfide (MoS2) appears to fall short of its promising potential in flexible nanoelectronics, we find that the nature of contacts plays a more important role than the semiconductor itself. In order to understand the nature of MoS2/metal contacts, we perform ab initio density functional theory calculations for the geometry, bonding, and electronic structure of the contact region. We find that the most common contact metal (Au) is rather inefficient for electron injection into single-layer MoS2 and propose Ti as a representative example of suitable alternative electrode materials.",
"title": ""
},
{
"docid": "b57fa292a5357b9cab294f62e63e0e81",
"text": "Emoji, a set of pictographic Unicode characters, have seen strong uptake over the last couple of years. All common mobile platforms and many desktop systems now support emoji entry, and users have embraced their use. Yet, we currently know very little about what makes for good emoji entry. While soft keyboards for text entry are well optimized, based on language and touch models, no such information exists to guide the design of emoji keyboards. In this article, we investigate of the problem of emoji entry, starting with a study of the current state of the emoji keyboard implementation in Android. To enable moving forward to novel emoji keyboard designs, we then explore a model for emoji similarity that is able to inform such designs. This semantic model is based on data from 21 million collected tweets containing emoji. We compare this model against a solely description-based model of emoji in a crowdsourced study. Our model shows good perfor mance in capturing detailed relationships between emoji.",
"title": ""
},
{
"docid": "ff4c2f1467a141894dbe76491bc06d3b",
"text": "Railways is the major means of transport in most of the countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused due to the shear stresses between the rails and wheels of the trains. This type of defects should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect the surface defect on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve uniform background with the help of direct enhancement method. The direct enhancement method enhance the image by enhancing the brightness difference between objects and their backgrounds. The enhanced rail image uses Gabor filters to identify the defects from the rails. The Gabor filters maximizes the energy difference between defect and defect less surface. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified and a message box is generated when there is a presence of defects.",
"title": ""
},
{
"docid": "0b357696dd2b68a7cef39695110e4e1b",
"text": "Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.",
"title": ""
},
{
"docid": "63ae128637d0855ca1b09793314aad03",
"text": "Gray platelet syndrome (GPS) is a predominantly recessive platelet disorder that is characterized by mild thrombocytopenia with large platelets and a paucity of α-granules; these abnormalities cause mostly moderate but in rare cases severe bleeding. We sequenced the exomes of four unrelated individuals and identified NBEAL2 as the causative gene; it has no previously known function but is a member of a gene family that is involved in granule development. Silencing of nbeal2 in zebrafish abrogated thrombocyte formation.",
"title": ""
},
{
"docid": "72cd858344bb5e0a878dd05fc8d07044",
"text": "This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning frommembership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.",
"title": ""
},
{
"docid": "1262f62fd1f9ca15ffad758da29cc1e2",
"text": "Analysis of networks and in particular discovering communities within networks has been a focus of recent work in several fields and has diverse applications. Most community detection methods focus on partitioning the entire network into communities, with the expectation of many ties within communities and few ties between. However, many networks contain nodes that do not fit in with any of the communities, and forcing every node into a community can distort results. Here we propose a new framework that extracts one community at a time, allowing for arbitrary structure in the remainder of the network, which can include weakly connected nodes. The main idea is that the strength of a community should depend on ties between its members and ties to the outside world, but not on ties between nonmembers. The proposed extraction criterion has a natural probabilistic interpretation in a wide class of models and performs well on simulated and real networks. For the case of the block model, we establish asymptotic consistency of estimated node labels and propose a hypothesis test for determining the number of communities.",
"title": ""
},
{
"docid": "e7ba504d2d9a80c0a10bfa4830a1fc54",
"text": "BACKGROUND\nGlobal and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies. We aimed to provide global estimates, trends, and projections of global blindness and vision impairment.\n\n\nMETHODS\nWe did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12).\n\n\nFINDINGS\nGlobally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9-65·4) were blind (crude prevalence 0·48%; 80% UI 0·17-0·87; 56% female), 216·6 million (80% UI 98·5-359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34-4·89; 55% female), and 188·5 million (80% UI 64·5-350·2) had mild visual impairment (2·57%, 80% UI 0·88-4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1-1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9-997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9-57·3) in 1990 to 36·0 million (80% UI 12·9-65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (-36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3-270·0) in 1990 to 216·6 million (80% UI 98·5-359·1) in 2015.\n\n\nINTERPRETATION\nThere is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population is causing a substantial increase in number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels.\n\n\nFUNDING\nBrien Holden Vision Institute.",
"title": ""
},
{
"docid": "18f95e8a2251e7bd582536c841070961",
"text": "This paper proposes and implements the concept of flexible induction heating based on the magnetic resonant coupling (MRC) mechanism. In conventional induction heating systems, the variation of the relative position between the heater and workpiece significantly deteriorates the heating performance. In particular, the heating effect dramatically reduces with the increase of vertical displacement or horizontal misalignment. This paper utilizes the MRC mechanism to effectuate flexible induction heating; thus, handling the requirements of varying vertical displacement and horizontal misalignment for various cooking styles. Differing from a conventional induction heating, the proposed induction heating adopts one resonant coil in the heater and one resonant coil in the workpiece, which can significantly strengthen the coupling effect, and, hence, the heating effect. Both the simulation and experimental results are given to validate the feasibility and flexibility of the proposed induction heating.",
"title": ""
},
{
"docid": "d698d49a82829a2bb772d1c3f6c2efc5",
"text": "The concepts of Data Warehouse, Cloud Computing and Big Data have been proposed during the era of data flood. By reviewing current progresses in data warehouse studies, this paper introduces a framework to achieve better visualization for Big Data. This framework can reduce the cost of building Big Data warehouses by divide data into sub dataset and visualize them respectively. Meanwhile, basing on the powerful visualization tool of D3.js and directed by the principle of Whole-Parts, current data can be presented to users from different dimensions by different rich statistics graphics.",
"title": ""
},
{
"docid": "c62c4aad141e489c6fb3f38a15e782f2",
"text": "This research aimed at developing a theoretical framework to predict the next obfuscation (or deobfuscation) move of the adversary, with the intent of making cyber defense proactive. More specifically, the goal was to understand the relationship between obfuscation and deobfuscation techniques employed in malware offense and defense. The strategy was to build upon previous work of Giacobazzi and Dalla Preda on modeling obfuscation and deobfuscation as abstract interpretations. It furthers that effort by developing an analytical model of the best obfuscation with respect to a deobfuscator. In addition, this research aimed at developing cost models for obfuscation and deobfuscations. The key findings of this research include: a theoretical model of computing the best obfuscation for a deobfuscator, a method for context-sensitive analysis of obfuscated code, a method for learning obfuscation transformations used by a metamorphic engine, several insights into the use of machine learning in deobfuscation, and game-theoretic models of certain scenarios of offense-defense games in software protection.",
"title": ""
}
] |
scidocsrr
|
2ce6d6fd225ce45117b21ef23838d9ef
|
The fuzzy logic method for innovation performance measurement
|
[
{
"docid": "b90b7b44971cf93ba343b5dcdd060875",
"text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.",
"title": ""
}
] |
[
{
"docid": "7436bf163d0dcf6d2fbe8ccf66431caf",
"text": "Zh h{soruh ehkdylrudo h{sodqdwlrqv iru vxe0rswlpdo frusrudwh lqyhvwphqw ghflvlrqv1 Irfxvlqj rq wkh vhqvlwlylw| ri lqyhvwphqw wr fdvk rz/ zh dujxh wkdw shuvrqdo fkdudfwhulvwlfv ri fklhi h{hfxwlyh r fhuv/ lq sduwlfxodu ryhufrq ghqfh/ fdq dffrxqw iru wklv zlghvsuhdg dqg shuvlvwhqw lqyhvwphqw glvwruwlrq1 Ryhufrq ghqw FHRv ryhuhvwlpdwh wkh txdolw| ri wkhlu lqyhvwphqw surmhfwv dqg ylhz h{whuqdo qdqfh dv xqgxo| frvwo|1 Dv d uhvxow/ wkh| lqyhvw pruh zkhq wkh| kdyh lqwhuqdo ixqgv dw wkhlu glvsrvdo1 Zh whvw wkh ryhufrq ghqfh k|srwkhvlv/ xvlqj gdwd rq shuvrqdo sruwirolr dqg frusrudwh lqyhvwphqw ghflvlrqv ri FHRv lq Iruehv 833 frpsdqlhv1 Zh fodvvli| FHRv dv ryhufrq ghqw li wkh| uhshdwhgo| idlo wr h{huflvh rswlrqv wkdw duh kljko| lq wkh prqh|/ ru li wkh| kdelwxdoo| dftxluh vwrfn ri wkhlu rzq frpsdq|1 Wkh pdlq uhvxow lv wkdw lqyhvwphqw lv vljql fdqwo| pruh uhvsrqvlyh wr fdvk rz li wkh FHR glvsod|v ryhufrq ghqfh1 Lq dgglwlrq/ zh lghqwli| shuvrqdo fkdudfwhulvwlfv rwkhu wkdq ryhufrq ghqfh +hgxfdwlrq/ hpsor|phqw edfnjurxqg/ frkruw/ plolwdu| vhuylfh/ dqg vwdwxv lq wkh frpsdq|, wkdw vwurqjo| d hfw wkh fruuhodwlrq ehwzhhq lqyhvwphqw dqg fdvk rz1",
"title": ""
},
{
"docid": "002c83aada3dbbc19a1da7561c53fc4b",
"text": "The Swedish preschool is an important socializing agent because the great majority of children aged, from 1 to 5 years, are enrolled in an early childhood education program. This paper explores how preschool teachers and children, in an ethnically diverse preschool, negotiate the meaning of cultural traditions celebrated in Swedish preschools. Particular focus is given to narrative representations of cultural traditions as they are co-constructed and negotiated in preschool practice between teachers and children. Cultural traditions are seen as shared events in the children’s preschool life, as well as symbolic resources which enable children and preschool teachers to conceive themselves as part of a larger whole. The data analyzed are three videotaped circle time events focused on why a particular tradition is celebrated. Methodologically the analysis builds on a narrative approach inspired by Bakhtin’s notion of addressivity and on Alexander’s ideas about dialogic teaching. The results of the analysis show that the teachers attempt to achieve a balance between transferring traditional cultural and religious values and realizing a child-centered pedagogy, emphasizing the child’s initiative. The analyses also show that narratives with a religious tonality generate some uncertainty on how to communicate with the children about the traditions that are being discussed. These research findings are important because, in everyday practice, preschool teachers enact whether religion is regarded as an essential part of cultural socialization, while acting both as keepers of traditions and agents of change.",
"title": ""
},
{
"docid": "2006a3fd87a3d7228b2a25061f7eb06b",
"text": "Thailand suffers from frequent flooding during the monsoon season and droughts in summer. In some places, severe cases of both may even occur. Managing water resources effectively requires a good information system for decision-making. There is currently a lack in knowledge sharing between organizations and researchers responsible. These are the experts in monitoring and controlling the water supply and its conditions. The knowledge owned by these experts are not captured, classified and integrated into an information system for decisionmaking. Ontologies are formal knowledge representation models. Knowledge management and artificial intelligence technology is a basic requirement for developing ontology-based semantic search on the Web. In this paper, we present ontology modeling approach that is based on the experiences of the researchers. The ontology for drought management consists of River Basin Ontology, Statistics Ontology and Task Ontology to facilitate semantic match during search. The hybrid ontology architecture can also be used for drought management",
"title": ""
},
{
"docid": "88a282d44199d47f9694eaac8efee370",
"text": "The mobile data traffic is expected to grow beyond 1000 times by 2020 compared with it in 2010. In order to support 1000 times of capacity increase, improving spectrum efficiency is one of the important approaches. Meanwhile, in Long Term Evolution (LTE)-Advanced, small cell and hotspot are important scenarios for future network deployment to increase the capacity from the network density domain. Under such environment, the probability of high Signal to Interference plus Noise Ratio (SINR) region becomes larger which brings the possibility of introducing higher order modulation, i.e., 256 Quadrature Amplitude Modulation(QAM) to improve the spectrum efficiency. Channel quality indicator (CQI) table design is a key issue to support 256 QAM. In this paper, we investigate the feasibility of 256 QAM by SINR geometry and propose two methods on CQI table design to support the 256 QAM transmission. Simulation results show proposed methods can improve average user equipment (UE) throughput and cell center UE throughput with almost no loss on cell edge UE throughput.",
"title": ""
},
{
"docid": "919ce1951d219970a05086a531b9d796",
"text": "Anti-neutrophil cytoplasmic autoantibodies (ANCA) and anti-glomerular basement membrane (GBM) necrotizing and crescentic glomerulonephritis are aggressive and destructive glomerular diseases that are associated with and probably caused by circulating ANCA and anti-GBM antibodies. These necrotizing lesions are manifested by acute nephritis and deteriorating kidney function often accompanied by distinctive clinical features of systemic disease. Prompt diagnosis requires clinical acumen that allows for the prompt institution of therapy aimed at removing circulating autoantibodies and quelling the inflammatory process. Continuing exploration of the etiology and pathogenesis of these aggressive inflammatory diseases have gradually uncovered new paradigms for the cause of and more specific therapy for these particular glomerular disorders and for autoimmune glomerular diseases in general.",
"title": ""
},
{
"docid": "71eff09febe33961acd9225487bd9b1c",
"text": "Recent market studies reveal that augmented reality (AR) devices, such as smart glasses, will substantially influence the media landscape. Yet, little is known about the intended adoption of smart glasses, particularly: Who are the early adopters of such wearables? We contribute to the growing body of research that investigates the role of personality in predicting media usage by analyzing smart glasses, such as Google Glass or Microsoft Hololens. First, we integrate AR devices into the current evolution of media and technologies. Then, we draw on the Big Five Model of human personality and present the results from two studies that investigate the direct and moderating effects of human personality on the awareness and innovation adoption of smart glasses. Our results show that open and emotionally stable consumers tend to be more aware of Google Glass. Consumers who perceive the potential for high functional benefits and social conformity of smart glasses are more likely to adopt such wearables. The strength of these effects is moderated by consumers’ individual personality, particularly by their levels of openness to experience, extraversion and neuroticism. This article concludes with a discussion of theoretical and managerial implications for research on technology adoption, and with suggestions for avenues for future research. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d4570f189544b0c21c8b431b1e70e0a2",
"text": "A novel transform-domain image watermark based on chaotic sequences is proposed in this paper. A complex chaos-based scheme is developed to embed a gray-level image in the wavelet domain of the original color image signal. The chaos system plays an important role in the security and invisibility of the proposed scheme. The parameter and initial state of chaos system directly influence the generation of watermark information as a key. Meanwhile, the watermark information has the property of spread spectrum signal by chaotic sequence. To improve the invisibility of watermarked image Computer simulation results show that the proposed algorithm is imperceptible and is robust to most watermarking attacks, especially to image cropping, JPEG compression and multipliable noise.",
"title": ""
},
{
"docid": "4b03e14540f4f38398dfea2dcd9950be",
"text": "This paper presents a simple method for segmenting colour regions into categories like red, green, blue, and yellow. We are interested in studying how colour categories influence colour selection during scientific visualization. The ability to name individual colours is also important in other problem domains like real-time displays, user-interface design, and medical imaging systems. Our algorithm uses the Munsell and CIE LUV colour models to automatically segment a colour space like RGB or CIE XYZ into ten colour categories. Users are then asked to name a small number of representative colours from each category. This provides three important results: a measure of the perceptual overlap between neighbouring categories, a measure of a category’s strength, and a user-chosen name for each strong category. We evaluated our technique by segmenting known colour regions from the RGB, HSV, and CIE LUV colour models. The names we obtained were accurate, and the boundaries between different colour categories were well defined. We concluded our investigation by conducting an experiment to obtain user-chosen names and perceptual overlap for ten colour categories along the circumference of a colour wheel in CIE LUV.",
"title": ""
},
{
"docid": "a86f7affdebbdfd2b07494c5a6603670",
"text": "Citation network analysis is an effective tool to analyze the structure of scientific research. Clustering is often used to visualize scientific domain and to detect emerging research front there. While we often set arbitrarily clustering threshold, there is few guide to set appropriate threshold. This study analyzed basic process how clustering of citation network proceeds by tracking size and modularity change during clustering. We found that there are three stages in clustering of citation networks and it is universal across our case studies. In the first stage, core clusters in the domain are formed. In the second stage, peripheral clusters are formed, while core clusters continue to grow. In the third stage, core clusters grow again. We found the minimum corpus size around one hundred assuring the clustering. When the corpus size is less than one hundred, clustered network structure tends to be more random. In addition even for the corpus whose size is larger than it, the clustering quality for some clusters formed in the later stage is low. These results give a fundamental guidance to the user of citation network analysis.",
"title": ""
},
{
"docid": "2cbae69bfb5d1379383cd1cf3e1237ef",
"text": "TerraSAR-X, the first civil German synthetic aperture radar (SAR) satellite has been successfully launched in 2007, June 15th. After 4.5 days the first processed image has been obtained. The overall quality of the image was outstanding, however, suspicious features could be identified which showed precipitation related signatures. These rain-cell signatures motivated a further in-depth study of the physical background of the related propagation effects. During the commissioning phase, a total of 12000 scenes have been investigated for potential propagation effects and about 100 scenes have revealed atmospheric effects to a visible extent. An interesting case of a data acquisition over New York will be presented which shows typical rain-cell signatures and the SAR image will be compared with weather-radar data acquired nearly simultaneously (within the same minute). Furthermore, in this contribution we discuss the influence of the atmosphere (troposphere) on the external calibration (XCAL) of TerraSAR-X. By acquiring simultaneous weather-radar data over the test-site and the SAR-acquisition it was possibleto improve the absolute calibration constant by 0.15 dB.",
"title": ""
},
{
"docid": "2d254443a7cbe748250acc0070c4a08b",
"text": "This paper introduces a new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of two main steps. First, we use a multinomial logistic regression (MLR) model to learn the class posterior probability distributions. This is done by using a recently introduced logistic regression via splitting and augmented Lagrangian algorithm. Second, we use the information acquired in the previous step to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information. In order to reduce the cost of acquiring large training sets, active learning is performed based on the MLR posterior probabilities. Another contribution of this paper is the introduction of a new active sampling approach, called modified breaking ties, which is able to provide an unbiased sampling. Furthermore, we have implemented our proposed method in an efficient way. For instance, in order to obtain the time-consuming maximum a posteriori segmentation, we use the α-expansion min-cut-based integer optimization algorithm. The state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently introduced hyperspectral image analysis methods.",
"title": ""
},
{
"docid": "90d5aca626d61806c2af3cc551b28c90",
"text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.",
"title": ""
},
{
"docid": "087fa20eb48166cb337a0285a583451a",
"text": "In this article, we present a method to perform automatic player trajectories mapping based on player detection, unsupervised labeling, efficient multi-object tracking, and playfield registration in broadcast soccer videos. Player detector determines the players' positions and scales by combining the ability of dominant color based background subtraction and a boosting detector with Haar features. We first learn the dominant color with accumulate color histogram at the beginning of processing, then use the player detector to collect hundreds of player samples, and learn player appearance codebook by unsupervised clustering. In a soccer game, a player can be labeled as one of four categories: two teams, referee or outlier. The learning capability enables the method to be generalized well to different videos without any manual initialization. With the dominant color and player appearance model, we can locate and label each player. After that, we perform multi-object tracking by using Markov Chain Monte Carlo (MCMC) data association to generate player trajectories. Some data driven dynamics are proposed to improve the Markov chain's efficiency, such as label consistency, motion consistency, and track length, etc. Finally, we extract key-points and find the mapping from an image plane to the standard field model, and then map players' position and trajectories to the field. A large quantity of experimental results on FIFA World Cup 2006 videos demonstrate that this method can reach high detection and labeling precision, reliably tracking in scenes of player occlusion, moderate camera motion and pose variation, and yield promising field registration results.",
"title": ""
},
{
"docid": "8beba75a7f089a02385b0532a5255636",
"text": "Recently, doc2vec has achieved excellent results in different tasks (Lau and Baldwin, 2016). In this paper, we present a context aware variant of doc2vec. We introduce a novel weight estimating mechanism that generates weights for each word occurrence according to its contribution in the context, using deep neural networks. Our context aware model can achieve similar results compared to doc2vec initialized by Wikipedia trained vectors, while being much more efficient and free from heavy external corpus. Analysis of context aware weights shows they are a kind of enhanced IDF weights that capture sub-topic level keywords in documents. They might result from deep neural networks that learn hidden representations with the least entropy.",
"title": ""
},
{
"docid": "da1f4117851762bfb5ef80c0893248c3",
"text": "The recently-developed WaveNet architecture (van den Oord et al., 2016a) is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, a 1000x speed up relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.",
"title": ""
},
{
"docid": "06ac34a4909ab44872ee8dc4656b22e7",
"text": "Moringa oleifera is an interesting plant for its use in bioactive compounds. In this manuscript, we review studies concerning the cultivation and production of moringa along with genetic diversity among different accessions and populations. Different methods of propagation, establishment and cultivation are discussed. Moringa oleifera shows diversity in many characters and extensive morphological variability, which may provide a resource for its improvement. Great genetic variability is present in the natural and cultivated accessions, but no collection of cultivated and wild accessions currently exists. A germplasm bank encompassing the genetic variability present in Moringa is needed to perform breeding programmes and develop elite varieties adapted to local conditions. Alimentary and medicinal uses of moringa are reviewed, alongside the production of biodiesel. Finally, being that the leaves are the most used part of the plant, their contents in terms of bioactive compounds and their pharmacological properties are discussed. Many studies conducted on cell lines and animals seem concordant in their support for these properties. However, there are still too few studies on humans to recommend Moringa leaves as medication in the prevention or treatment of diseases. Therefore, further studies on humans are recommended.",
"title": ""
},
{
"docid": "fee9c0dbf2cbe0107b7a999694f293ca",
"text": "In traditional approaches for clustering market basket type data, relations among transactions are modeled according to the items occurring in these transactions. However, an individual item might induce different relations in different contexts. Since such contexts might be captured by interesting patterns in the overall data, we represent each transaction as a set of patterns through modifying the conventional pattern semantics. By clustering the patterns in the dataset, we infer a clustering of the transactions represented this way. For this, we propose a novel hypergraph model to represent the relations among the patterns. Instead of a local measure that depends only on common items among patterns, we propose a global measure that is based on the cooccurences of these patterns in the overall data. The success of existing hypergraph partitioning based algorithms in other domains depends on sparsity of the hypergraph and explicit objective metrics. For this, we propose a two-phase clustering approach for the above hypergraph, which is expected to be dense. In the first phase, the vertices of the hypergraph are merged in a multilevel algorithm to obtain large number of high quality clusters. Here, we propose new quality metrics for merging decisions in hypergraph clustering specifically for this domain. In order to enable the use of existing metrics in the second phase, we introduce a vertex-to-cluster affinity concept to devise a method for constructing a sparse hypergraph based on the obtained clustering. The experiments we have performed show the effectiveness of the proposed framework.",
"title": ""
},
{
"docid": "fda354376533cde56d00a34cecf02a31",
"text": "Support for fine-grained data management has all but disappeared from modern operating systems such as Android and iOS. Instead, we must rely on each individual application to manage our data properly – e.g., to delete our emails, documents, and photos in full upon request; to not collect more data than required for its function; and to back up our data to reliable backends. Yet, research studies and media articles constantly remind us of the poor data management practices applied by our applications. We have developed Pebbles, a fine-grained data management system that enables management at a powerful new level of abstraction: application-level data objects, such as emails, documents, notes, notebooks, bank accounts, etc. The key contribution is Pebbles’s ability to discover such high-level objects in arbitrary applications without requiring any input from or modifications to these applications. Intuitively, it seems impossible for an OS-level service to understand object structures in unmodified applications, however we observe that the high-level storage abstractions embedded in modern OSes – relational databases and object-relational mappers – bear significant structural information that makes object recognition possible and accurate.",
"title": ""
},
{
"docid": "aa4b36c95058177167c58d4e192c8c1d",
"text": "Face detection is a prominent research domain in the field of digital image processing. Out of various algorithms developed so far, Viola–Jones face detection has been highly successful. However, because of its complex nature, there is need to do more exploration in its various phases including training as well as actual face detection to find the scope of further improvement in terms of efficiency as well as accuracy under various constraints so as to detect and process the faces in real time. Its training phase for the screening of large amount of Haar features and generation of cascade classifiers is quite tedious and computationally intensive task. Any modification for improvement in its features or cascade classifiers requires re-training of all the features through example images, which are very large in number. Therefore, there is need to enhance the computational efficiency of training process of Viola–Jones face detection algorithm so that further enhancement in this framework is made easy. There are three main contributions in this research work. Firstly, we have achieved a considerable speedup by parallelizing the training as well as detection of rectangular Haar features based upon Viola–Jones framework on GPU. Secondly, the analysis of features selected through AdaBoost has been done, which can give intuitiveness in developing more innovative and efficient techniques for selecting competitive classifiers for the task of face detection, which can further be generalized for any type of object detection. Thirdly, implementation of parallelization techniques of modified version of Viola–Jones face detection algorithm in combination with skin color filtering to reduce the search space has been done. We have been able to achieve considerable reduction in the search space and time cost by using the skin color filtering in conjunction with the Viola–Jones algorithm. Time cost reduction of the order of 54.31% at the image resolution of 640*480 of GPU time versus CPU time has been achieved by the proposed parallelized algorithm.",
"title": ""
}
] |
scidocsrr
|
eab7c4f6ddc540a909de26b04a26f580
|
Addressing ambiguity in multi-target tracking by hierarchical strategy
|
[
{
"docid": "198311a68ad3b9ee8020b91d0b029a3c",
"text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"title": ""
}
] |
[
{
"docid": "60f31d60213abe65faec3eb69edb1eea",
"text": "In this paper, a novel multi-layer four-way out-of-phase power divider based on substrate integrated waveguide (SIW) is proposed. The four-way power division is realized by 3-D mode coupling; vertical partitioning of a SIW followed by lateral coupling to two half-mode SIW. The measurement results show the excellent insertion loss (S<inf>21</inf>, S<inf>31</inf>, S<inf>41</inf>, S<inf>51</inf>: −7.0 ± 0.5 dB) and input return loss (S<inf>11</inf>: −10 dB) in X-band (7.63 GHz ∼ 11.12 GHz). We expect that the proposed power divider play an important role for the integration of compact multi-way SIW circuits.",
"title": ""
},
{
"docid": "337afded77b22d4e1460569c561cad1a",
"text": "The mammalian hippocampus is critical for spatial information processing and episodic memory. Its primary output cells, CA1 pyramidal cells (CA1 PCs), vary in genetics, morphology, connectivity, and electrophysiological properties. It is therefore possible that distinct CA1 PC subpopulations encode different features of the environment and differentially contribute to learning. To test this hypothesis, we optically monitored activity in deep and superficial CA1 PCs segregated along the radial axis of the mouse hippocampus and assessed the relationship between sublayer dynamics and learning. Superficial place maps were more stable than deep during head-fixed exploration. Deep maps, however, were preferentially stabilized during goal-oriented learning, and representation of the reward zone by deep cells predicted task performance. These findings demonstrate that superficial CA1 PCs provide a more stable map of an environment, while their counterparts in the deep sublayer provide a more flexible representation that is shaped by learning about salient features in the environment. VIDEO ABSTRACT.",
"title": ""
},
{
"docid": "6976614013c1aa550b5e506b1d1203e7",
"text": "Here we present an overview of various techniques performed concomitantly during penile prosthesis surgery to enhance penile length and girth. We report on the technique of ventral phalloplasty and its outcomes along with augmentation corporoplasty, suprapubic lipectomy, suspensory ligament release, and girth enhancement procedures. For the serious implanter, outcomes can be improved by combining the use of techniques for each scar incision. These adjuvant procedures are a key addition in the armamentarium for the serious implant surgeon.",
"title": ""
},
{
"docid": "d495f9ae71492df9225249147563a3d9",
"text": "The control of a PWM rectifier with LCL-filter using a minimum number of sensors is analyzed. In addition to the DC-link voltage either the converter or line current is measured. Two different ways of current control are shown, analyzed and compared by simulations as well as experimental investigations. Main focus is spent on active damping of the LCL filter resonance and on robustness against line inductance variations.",
"title": ""
},
{
"docid": "e3e4d19aa9a5db85f30698b7800d2502",
"text": "In this paper we examine the use of a mathematical procedure, called Principal Component Analysis, in Recommender Systems. The resulting filtering algorithm applies PCA on user ratings and demographic data, aiming to improve various aspects of the recommendation process. After a brief introduction to PCA, we provide a discussion of the proposed PCADemog algorithm, along with possible ways of combining it with different sources of filtering data. The experimental part of this work tests distinct parameterizations for PCA-Demog, identifying those with the best performance. Finally, the paper compares their results with those achieved by other filtering approaches, and draws interesting conclusions.",
"title": ""
},
{
"docid": "f639aa4b80593934d4714a77ad0dde92",
"text": "Moringa Oleifera (MO), a plant from the family Moringacea is a major crop in Asia and Africa. MO has been studied for its health properties, attributed to the numerous bioactive components, including vitamins, phenolic acids, flavonoids, isothiocyanates, tannins and saponins, which are present in significant amounts in various components of the plant. Moringa Oleifera leaves are the most widely studied and they have shown to be beneficial in several chronic conditions, including hypercholesterolemia, high blood pressure, diabetes, insulin resistance, non-alcoholic liver disease, cancer and overall inflammation. In this review, we present information on the beneficial results that have been reported on the prevention and alleviation of these chronic conditions in various animal models and in cell studies. The existing limited information on human studies and Moringa Oleifera leaves is also presented. Overall, it has been well documented that Moringa Oleifera leaves are a good strategic for various conditions associated with heart disease, diabetes, cancer and fatty liver.",
"title": ""
},
{
"docid": "893f3d5ab013a9c156139ef2626b7053",
"text": "Intelligent systems capable of automatically understanding natural language text are important for many artificial intelligence applications including mobile phone voice assistants, computer vision, and robotics. Understanding language often constitutes fitting new information into a previously acquired view of the world. However, many machine reading systems rely on the text alone to infer its meaning. In this paper, we pursue a different approach; machine reading methods that make use of background knowledge to facilitate language understanding. To this end, we have developed two methods: The first method addresses prepositional phrase attachment ambiguity. It uses background knowledge within a semi-supervised machine learning algorithm that learns from both labeled and unlabeled data. This approach yields state-of-the-art results on two datasets against strong baselines; The second method extracts relationships from compound nouns. Our knowledge-aware method for compound noun analysis accurately extracts relationships and significantly outperforms a baseline that does not make use of background knowledge.",
"title": ""
},
{
"docid": "0b97ba6017a7f94ed34330555095f69a",
"text": "In response to stress, the brain activates several neuropeptide-secreting systems. This eventually leads to the release of adrenal corticosteroid hormones, which subsequently feed back on the brain and bind to two types of nuclear receptor that act as transcriptional regulators. By targeting many genes, corticosteroids function in a binary fashion, and serve as a master switch in the control of neuronal and network responses that underlie behavioural adaptation. In genetically predisposed individuals, an imbalance in this binary control mechanism can introduce a bias towards stress-related brain disease after adverse experiences. New candidate susceptibility genes that serve as markers for the prediction of vulnerable phenotypes are now being identified.",
"title": ""
},
{
"docid": "7cbea1103832c97b22bfe8d1c174bd64",
"text": "Large amount of user generated data is present on web as blogs, reviews tweets, comments etc. This data involve user’s opinion, view, attitude, sentiment towards particular product, topic, event, news etc. Opinion mining (sentiment analysis) is a process of finding users’ opinion from user-generated content. Opinion summarization is useful in feedback analysis, business decision making and recommendation systems. In recent years opinion mining is one of the popular topics in Text mining and Natural Language Processing. This paper presents the methods for opinion extraction, classification, and summarization. This paper also explains different approaches, methods and techniques used in process of opinion mining and summarization, and comparative study of these different methods. Keywords— Natural Language Processing, Opinion Mining, Opinion Summarization.",
"title": ""
},
{
"docid": "21c3a2ac5f8a38eefb7337b166ad47ad",
"text": "Although great progresses have been made in automatic speech recognition (ASR), significant performance degradation is still observed when recognizing multi-talker mixed speech. In this paper, we propose and evaluate several architectures to address this problem under the assumption that only a single channel of mixed signal is available. Our technique extends permutation invariant training (PIT) by introducing the frontend feature separation module with the minimum mean square error (MSE) criterion and the back-end recognition module with the minimum cross entropy (CE) criterion. More specifically, during training we compute the average MSE or CE over the whole utterance for each possible utterance-level output-target assignment, pick the one with the minimum MSE or CE, and optimize for that assignment. This strategy elegantly solves the label permutation problem observed in the deep learning based multi-talker mixed speech separation and recognition systems. The proposed architectures are evaluated and compared on an artificially mixed AMI dataset with both twoand threetalker mixed speech. The experimental results indicate that our proposed architectures can cut the word error rate (WER) by 45.0% and 25.0% relatively against the state-of-the-art singletalker speech recognition system across all speakers when their energies are comparable, for twoand three-talker mixed speech, respectively. To our knowledge, this is the first work on the multi-talker mixed speech recognition on the challenging speakerindependent spontaneous large vocabulary continuous speech task. Keywords—permutation invariant training, multi-talker mixed speech recognition, feature separation, joint-optimization",
"title": ""
},
{
"docid": "8953837ae11284b4be15d0abbaf7db77",
"text": "UAV has been a popular piece of equipment both in military and civilian applications. Groups of UAVs can form an UAV network and accomplish complicated missions such as rescue, searching, patrolling and mapping. One of the most active areas of research in UAV networks is that of area coverage problem which is usually defined as a problem of how well the UAV networks are able to monitor the given space, and how well the UAVs inside a network are able to cooperate with each other. Area coverage problem in cooperative UAV networks is the very base of many applications. In this paper, we take a representative survey of the current work that has been done about this problem via discussion of different classifications. This study serves as an overview of area coverage problem, and give some inspiration to related researchers.",
"title": ""
},
{
"docid": "a9e26514ffc78c1018e00c63296b9584",
"text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.",
"title": ""
},
{
"docid": "42d79800699b372489ad6c95ac91b21c",
"text": "Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems. Recommender systems, industrial plants and language models are only some of the many real-world tasks involving large numbers of discrete actions for which current methods can be difficult or even impossible to apply. An ability to generalize over the set of actions as well as sub-linear complexity relative to the size of the set are both necessary to handle such tasks. Current approaches are not able to provide both of these, which motivates the work in this paper. Our proposed approach leverages prior information about the actions to embed them in a continuous space upon which it can generalize. Additionally, approximate nearest-neighbor methods allow for logarithmic-time lookup complexity relative to the number of actions, which is necessary for time-wise tractable training. This combined approach allows reinforcement learning methods to be applied to large-scale learning problems previously intractable with current methods. We demonstrate our algorithm’s abilities on a series of tasks having up to one million actions.",
"title": ""
},
{
"docid": "27a583d33644887ad126e8e4844dd2e3",
"text": "In this work, we will explore different approaches used in Cross-Lingual Information Retrieval (CLIR) systems. Mainly, CLIR systems which use statistical machine translation (SMT) systems to translate queries into collection language. This will include using SMT systems as a black box or as a white box, also the SMT systems that are tuned towards better CLIR performance. After that, we will present our approach to rerank the alternative translations using machine learning regression model. This includes also introducing our set of features which we used to train the model. After that, we adapt this reranker for new languages. We also present our query expansion approach using word-embeddings model that is trained on medical data. Finally we reinvestigate translating the document collection into query language, then we present our future work.",
"title": ""
},
{
"docid": "f4f70276ef59f9b206558613c95b5a8b",
"text": "We present a general approach to creating realistic swimming behavior for a given articulated creature body. The two main components of our method are creature/fluid simulation and the optimization of the creature motion parameters. We simulate two-way coupling between the fluid and the articulated body by solving a linear system that matches acceleration at fluid/solid boundaries and that also enforces fluid incompressibility. The swimming motion of a given creature is described as a set of periodic functions, one for each joint degree of freedom. We optimize over the space of these functions in order to find a motion that causes the creature to swim straight and stay within a given energy budget. Our creatures can perform path following by first training appropriate turning maneuvers through offline optimization and then selecting between these motions to track the given path. We present results for a clownfish, an eel, a sea turtle, a manta ray and a frog, and in each case the resulting motion is a good match to the real-world animals. We also demonstrate a plausible swimming gait for a fictional creature that has no real-world counterpart.",
"title": ""
},
{
"docid": "e2f69fd023cfe69432459e8a82d4c79a",
"text": "Thresholding is one of the popular and fundamental techniques for conducting image segmentation. Many thresholding techniques have been proposed in the literature. Among them, the minimum cross entropy thresholding (MCET) have been widely adopted. Although the MCET method is effective in the bilevel thresholding case, it could be very time-consuming in the multilevel thresholding scenario for more complex image analysis. This paper first presents a recursive programming technique which reduces an order of magnitude for computing the MCET objective function. Then, a particle swarm optimization (PSO) algorithm is proposed for searching the near-optimal MCET thresholds. The experimental results manifest that the proposed PSO-based algorithm can derive multiple MCET thresholds which are very close to the optimal ones examined by the exhaustive search method. The convergence of the proposed method is analyzed mathematically and the results validate that the proposed method is efficient and is suited for real-time applications. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9d50be44155665f5fa2fb213c23d51f2",
"text": "A number of proposals have been put forth in recent years for the solution of Markov decision processes (MDPs) whose state (and sometimes action) spaces are factored. One recent class of methods involves linear value function approximation, where the optimal value function is assumed to be a linear combination of some set of basis functions, with the aim of finding suitable weights. While sophisticated techniques have been developed for finding the best approximation within this constrained space, few methods have been proposed for choosing a suitable basis set, or modifying it if solution quality is found wanting. We propose a general framework, and specific proposals, that address both of these questions. In particular, we examine <i>weakly coupled MDPs</i> where a number of subtasks can be viewed independently modulo resource constraints. We then describe methods for constructing a piecewise linear combination of the subtask value functions, using greedy decision tree techniques. We argue that this architecture is suitable for many types of MDPs whose combinatorics are determined largely by the existence multiple conflicting objectives.",
"title": ""
},
{
"docid": "b101ab8f2242e85ccd7948b0b3ffe9b4",
"text": "This paper describes a language-independent model for multi-class sentiment analysis using a simple neural network architecture of five layers (Embedding, Conv1D, GlobalMaxPooling and two Fully-Connected). The advantage of the proposed model is that it does not rely on language-specific features such as ontologies, dictionaries, or morphological or syntactic pre-processing. Equally important, our system does not use pre-trained word2vec embeddings which can be costly to obtain and train for some languages. In this research, we also demonstrate that oversampling can be an effective approach for correcting class imbalance in the data. We evaluate our methods on three publicly available datasets for English, German and Arabic, and the results show that our system’s performance is comparable to, or even better than, the state of the art for these datasets. We make our source-code publicly available.",
"title": ""
},
{
"docid": "ddff515d457f3985beece2fdd38ec344",
"text": "Segmentation is an important step in many perception tasks, such as object detection and recognition. We present an approach to organized point cloud segmentation and its application to plane segmentation, and euclidean clustering for tabletop object detection. The proposed approach is efficient and enables real-time plane segmentation for VGA resolution RGB-D data. Timing results are provided for indoor datasets, and applications to tabletop object segmentation and mapping with planar landmarks are discussed.",
"title": ""
},
{
"docid": "9855d5b08e46b454a519b0c245e52ccc",
"text": "Sparse matrix vector multiplication (SpMV) kernel is a key computation in linear algebra. Most iterative methods are composed of SpMV operations with BLAS1 updates. Therefore, researchers make extensive efforts to optimize the SpMV kernel in sparse linear algebra. With the appearance of OpenCL, a programming language that standardizes parallel programming across a wide variety of heterogeneous platforms, we are able to optimize the SpMV kernel on many different platforms. In this paper, we propose a new sparse matrix format, the Cocktail Format, to take advantage of the strengths of many different sparse matrix formats. Based on the Cocktail Format, we develop the clSpMV framework that is able to analyze all kinds of sparse matrices at runtime, and recommend the best representations of the given sparse matrices on different platforms. Although solutions that are portable across diverse platforms generally provide lower performance when compared to solutions that are specialized to particular platforms, our experimental results show that clSpMV can find the best representations of the input sparse matrices on both Nvidia and AMD platforms, and deliver 83% higher performance compared to the vendor optimized CUDA implementation of the proposed hybrid sparse format in [3], and 63.6% higher performance compared to the CUDA implementations of all sparse formats in [3].",
"title": ""
}
] |
scidocsrr
|
43644b27fb99f385f7d3d3f8b026deff
|
A novel segmentation algorithm for MRI brain tumor images
|
[
{
"docid": "4e20e28f7da8c76a6868ed7167a49c1b",
"text": "Nature enthused algorithms are the most potent for optimization. Cuckoo Search (CS) algorithm is one such algorithm which is efficient in solving optimization problems in varied fields. This paper appraises the basic concepts of cuckoo search algorithm and its application towards the segmentation of brain tumor from the Magnetic Resonance Images (MRI). The human brain is the most complex structure where identifying the tumor like diseases are extremely challenging because differentiating the components of the brain is complex. The tumor may sometimes occur with the same intensity of normal tissues. The tumor, edema, blood clot and some part of brain tissues appear as same and make the work of the radiologist more complex. In general the brain tumor is detected by radiologist through a comprehensive analysis of MR images, which takes substantially a longer time. The key inventiveness is to develop a diagnostic system using the best optimization technique called the cuckoo search, that would assist the radiologist to have a second opinion regarding the presence or absence of tumor. This paper explores the CS algorithm, performing a profound study of its search mechanisms to discover how it is efficient in detecting tumors and compare the results with the other commonly used optimization algorithms.",
"title": ""
},
{
"docid": "a0b4cc2c6f68cde8accfd35e2cb7128c",
"text": "Detection, diagnosis and evaluation of Brain tumour is an important task in recent days. MRI is the current technology which enables the detection, diagnosis and evaluation. The medical problems are severe if tumour is detected at the later stage. Hence diagnosis is necessary at the earliest. In this work, pulse coupled neural network is applied for enhancing the MR Images. The enhanced images are segmented and classified using back propagation networks. The Classification involves labelling the images into normal and abnormal (tumor detected). If the input MRI brain images are more in number, the physician could seek the help of this model and the network would help the physician to save time for further analysis. PCNN and BPN are less complex in nature and hence the processing of MRI brain images is very simple. The term ‘abnormal’ indicates the presence of tumour. The tumour may be benign or malignant and it needs medical support for further classification.",
"title": ""
}
] |
[
{
"docid": "94d4dd3c1b47b10a65d0c98434d495d4",
"text": "Comparisons of chromosome X and the autosomes can illuminate differences in the histories of males and females as well as shed light on the forces of natural selection. We compared the patterns of variation in these parts of the genome using two datasets that we assembled for this study that are both genomic in scale. Three independent analyses show that around the time of the dispersal of modern humans out of Africa, chromosome X experienced much more genetic drift than is expected from the pattern on the autosomes. This is not predicted by known episodes of demographic history, and we found no similar patterns associated with the dispersals into East Asia and Europe. We conclude that a sex-biased process that reduced the female effective population size, or an episode of natural selection unusually affecting chromosome X, was associated with the founding of non-African populations.",
"title": ""
},
{
"docid": "40dfe4f55e2afe289bfe8a540356ef89",
"text": "We explore the Tully-Fisher relation over five decades in stellar mass in galaxies with circular velocities ranging over 30 less, similarVc less, similar300 km s-1. We find a clear break in the optical Tully-Fisher relation: field galaxies with Vc less, similar90 km s-1 fall below the relation defined by brighter galaxies. These faint galaxies, however, are very rich in gas; adding in the gas mass and plotting the baryonic disk mass Md=M*+Mgas in place of luminosity restores the single linear relation. The Tully-Fisher relation thus appears fundamentally to be a relation between rotation velocity and total baryonic mass of the form Md~V4c.",
"title": ""
},
{
"docid": "3f6572916ac697188be30ef798acbbff",
"text": "The vector representation of Bengali words using word2vec model (Mikolov et al. (2013)) plays an important role in Bengali sentiment classification. It is observed that the words that are from same context stay closer in the vector space of word2vec model and they are more similar than other words. In this article, a new approach of sentiment classification of Bengali comments with word2vec and Sentiment extraction of words are presented. Combining the results of word2vec word co-occurrence score with the sentiment polarity score of the words, the accuracy obtained is 75.5%.",
"title": ""
},
{
"docid": "0f122797e9102c6bab57e64176ee5e84",
"text": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"title": ""
},
{
"docid": "1f28f5efa70a6387b00e335a8cf1e1d0",
"text": "The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art.",
"title": ""
},
{
"docid": "33dcba37947e3bdb5956f7355393eea5",
"text": "Big Data and Cloud computing are the most important technologies that give the opportunity for government agencies to gain a competitive advantage and improve their organizations. On one hand, Big Data implementation requires investing a significant amount of money in hardware, software, and workforce. On the other hand, Cloud Computing offers an unlimited, scalable and on-demand pool of resources which provide the ability to adopt Big Data technology without wasting on the financial resources of the organization and make the implementation of Big Data faster and easier. The aim of this study is to conduct a systematic literature review in order to collect data to identify the benefits and challenges of Big Data on Cloud for government agencies and to make a clear understanding of how combining Big Data and Cloud Computing help to overcome some of these challenges. The last objective of this study is to identify the solutions for related challenges of Big Data. Four research questions were designed to determine the information that is related to the objectives of this study. Data is collected using literature review method and the results are deduced from there.",
"title": ""
},
{
"docid": "21956ec5eb23c53245253245812bebf6",
"text": "In pharmaceutical drug development and manufacturing, the amount and complexity of information of different types, ranging from raw experimental data to lab reports to complex mathematical models that needs to be stored, accessed, validated, manipulated, managed, and used for decision making is staggering. The information is often in different formats, used in different computer tools, making smooth interaction between these tools difficult. A common, explicit, and platform-independent vocabulary that is both machine accessible and human usable is needed to streamline the flow of information and knowledge generation. The Purdue Ontology for Pharmaceutical Engineering (POPE) was developed to address this informatics challenge. POPE models information and knowledge and includes models of phases, material properties, molecular structures, experiments, reactions, and unit operations. In Part 1, we describe the conceptual framework of POPE and in Part 2 its applications.",
"title": ""
},
{
"docid": "e79df31bd411d7c62d625a047dde61ce",
"text": "The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this article, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art methods, especially in low signal-to-noise ratio (SNR) settings. We also develop a comprehensive physically-motivated simulator for C-ToF cameras that can be used to evaluate various coding schemes prior to a real hardware implementation. Since most off-the-shelf C-ToF sensors use sinusoid or square functions, we develop a hardware prototype that can implement a wide range of coding functions. Using this prototype and our software simulator, we demonstrate the performance advantages of the proposed Hamiltonian coding functions in a wide range of imaging settings.",
"title": ""
},
{
"docid": "941cd6b47980ff8539b7124a48f160e5",
"text": "Question Answering for complex questions is often modelled as a graph construction or traversal task, where a solver must build or traverse a graph of facts that answer and explain a given question. This “multi-hop” inference has been shown to be extremely challenging, with few models able to aggregate more than two facts before being overwhelmed by “semantic drift”, or the tendency for long chains of facts to quickly drift off topic. This is a major barrier to current inference models, as even elementary science questions require an average of 4 to 6 facts to answer and explain. In this work we empirically characterize the difficulty of building or traversing a graph of sentences connected by lexical overlap, by evaluating chance sentence aggregation quality through 9,784 manually-annotated judgements across knowledge graphs built from three freetext corpora (including study guides and Simple Wikipedia). We demonstrate semantic drift tends to be high and aggregation quality low, at between 0.04% and 3%, and highlight scenarios that maximize the likelihood of meaningfully combining information.",
"title": ""
},
{
"docid": "c347f649a6a183d7ee3f5abddfcbc2a1",
"text": "Concern has grown regarding possible harm to the social and psychological development of children and adolescents exposed to Internet pornography. Parents, academics and researchers have documented pornography from the supply side, assuming that its availability explains consumption satisfactorily. The current paper explored the user's dimension, probing whether pornography consumers differed from other Internet users, as well as the social characteristics of adolescent frequent pornography consumers. Data from a 2004 survey of a national representative sample of the adolescent population in Israel were used (n=998). Adolescent frequent users of the Internet for pornography were found to differ in many social characteristics from the group that used the Internet for information, social communication and entertainment. Weak ties to mainstream social institutions were characteristic of the former group but not of the latter. X-rated material consumers proved to be a distinct sub-group at risk of deviant behaviour.",
"title": ""
},
{
"docid": "9f005054e640c2db97995c7540fe2034",
"text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.",
"title": ""
},
{
"docid": "dc9dc86d2ff5775636fa2bc00369a110",
"text": "Using a cognitive linguistics perspective, this book provides the most comprehensive theoretical analysis of the semantics of English prepositions available. All English prepositions originally coded spatial relations between two physical entities; while retaining their original meaning, prepositions have also developed a rich set of non-spatial meanings. In this innovative study, Tyler and Evans argue that all these meanings are systematically grounded in the nature of human spatio-physical experience. The original ‘spatial scenes’ provide the foundation for the extension of meaning from the spatial to the more abstract. This analysis introduces a new methodology that distinguishes between a conventional meaning and an interpretation produced for understanding the preposition in context, as well as establishing which of several competing senses should be taken as the primary sense. Together, the methodology and framework are sufficiently articulated to generate testable predictions and allow the analysis to be applied to additional prepositions.",
"title": ""
},
{
"docid": "3b9f17e8720b4513d18a2fc3d8f54700",
"text": "The purpose of this study was to examine the effects of amino acid supplementation on muscular strength, power, and high-intensity endurance during short-term resistance training overreaching. Seventeen resistance-trained men were randomly assigned to either an amino acid (AA) or placebo (P) group and underwent 4 weeks of total-body resistance training consisting of two 2-week phases of overreaching (phase 1: 3 x 8-12 repetitions maximum [RM], 8 exercises; phase 2: 5 x 3-5 RM, 5 exercises). Muscle strength, power, and high-intensity endurance were determined before (T1) and at the end of each training week (T2-T5). One repetition maximum squat and bench press decreased at T2 in P (5.2 and 3.4 kg, respectively) but not in AA, and significant increases in 1 RM squat and bench press were observed at T3-T5 in both groups. A decrease in the ballistic bench press peak power was observed at T3 in P but not AA. The fatigue index during the 20-repetition jump squat assessment did not change in the P group at T3 and T5 (fatigue index = 18.6 and 18.3%, respectively) whereas a trend for reduction was observed in the AA group (p = 0.06) at T3 (12.8%) but not T5 (15.2%; p = 0.12). These results indicate that the initial impact of high-volume resistance training overreaching reduces muscle strength and power, and it appears that these reductions are attenuated with amino acid supplementation. In addition, an initial high-volume, moderate-intensity phase of overreaching followed by a higher intensity, moderate-volume phase appears to be very effective for enhancing muscle strength in resistance-trained men.",
"title": ""
},
{
"docid": "2354b0d44c4ce75bee5f91c7bbbe91b0",
"text": "The central role of phosphoinositide 3-kinase (PI3K) activation in tumour cell biology has prompted a sizeable effort to target PI3K and/or downstream kinases such as AKT and mammalian target of rapamycin (mTOR) in cancer. However, emerging clinical data show limited single-agent activity of inhibitors targeting PI3K, AKT or mTOR at tolerated doses. One exception is the response to PI3Kδ inhibitors in chronic lymphocytic leukaemia, where a combination of cell-intrinsic and -extrinsic activities drive efficacy. Here, we review key challenges and opportunities for the clinical development of inhibitors targeting the PI3K–AKT–mTOR pathway. Through a greater focus on patient selection, increased understanding of immune modulation and strategic application of rational combinations, it should be possible to realize the potential of this promising class of targeted anticancer agents.",
"title": ""
},
{
"docid": "5493c7807a0f92f4c5d781b0fe1d0336",
"text": "Gait has been shown to be an efficient biometric feature for human identification at a distance from a camera. However, performance of gait recognition can be affected by various problems. One of the serious problems is view change which can be caused by change of walking direction and/or change of camera viewpoint. This leads to a consequent difficulty of across-view gait recognition where probe and gallery gaits are captured from different views. In this study, a novel method is proposed to solve the above difficulty based on View Transformation Model (VTM) under an uncalibrated single camera system. VTM is constructed based on regression processes by adopting Multi-Layer Perceptron (MLP) as a regression tool. VTM smoothly estimates gait feature from one view using motion information in a well selected Region of Interest (ROI) on gait feature from another view. Thus, pre-trained VTMs can normalize gait features from across views into a same view before gait similarity measurement is carried out. Comprehensively, the proposed method is logically extended for multi-view gait recognition where gallery gaits from multiple views are used to recognize probe gait from single view. This is addressed using multi-view to one-view transformation where VTM is now employed to estimate gait feature from single view using motion information in well selected ROIs on gait features from multiple views. The proposed method is tested on a large benchmark gait database which contains 124 subjects from 11 views. Extensive experimental results demonstrate that our method significantly outperforms other baseline methods in literature for both across-view and multi-view gait recognitions. In our experiments, particularly, average accuracies of 99%, 98% and 93% are achieved under multiple view gait recognition by using 5 cameras, 4 cameras and 3 cameras respectively.",
"title": ""
},
{
"docid": "f732f152ef61ce22fa50862531bd4996",
"text": "We empirically analyze the competitive benefits of sharing economy services to understand why people participate in the sharing economy. We employ the social exchange theory to examine the participation intention in sharing over owning. We emphasize on the importance of service platform as a trusted third party and its influence on reducing the perceived risk of sharing economy. The research model includes the key antecedents to trust and relative advantages of sharing economy services. The model will be tested with the Airbnb users’ data. The research results are expected to contribute to researchers and practitioners to understand the sharing economy.",
"title": ""
},
{
"docid": "79f691668b5e1d13cd1bfa70dfa33384",
"text": "Reported speech in the form of direct and indirect reported speech is an important indicator of evidentiality in traditional newspaper texts, but also increasingly in the new media that rely heavily on citation and quotation of previous postings, as for instance in blogs or newsgroups. This paper details the basic processing steps for reported speech analysis and reports on performance of an implementation in form of a GATE resource.",
"title": ""
},
{
"docid": "b395aa3ae750ddfd508877c30bae3a38",
"text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.",
"title": ""
},
{
"docid": "799883184a752a4f97eeb7ba474bbb8b",
"text": "This paper presents the design and implementation of a distributed virtual reality (VR) platform that was developed to support the training of multiple users who must perform complex tasks in which situation assessment and critical thinking are the primary components of success. The system is fully immersive and multimodal, and users are represented as tracked, full-body figures. The system supports the manipulation of virtual objects, allowing users to act upon the environment in a natural manner. The underlying intelligent simulation component creates an interactive, responsive world in which the consequences of such actions are presented within a realistic, time-critical scenario. The focus of this work has been on the training of medical emergency-response personnel. BioSimMER, an application of the system to training first responders to an act of bio-terrorism, has been implemented and is presented throughout the paper as a concrete example of how the underlying platform architecture supports complex training tasks. Finally, a preliminary field study was performed at the Texas Engineering Extension Service Fire Protection Training Division. The study focused on individual, rather than team, interaction with the system and was designed to gauge user acceptance of VR as a training tool. The results of this study are presented.",
"title": ""
},
{
"docid": "a1d6a739b10ec93229c33e0a8607e75e",
"text": "We present and discuss the important business problem of estimating the effect of retention efforts on the Lifetime Value of a customer in the Telecommunications industry. We discuss the components of this problem, in particular customer value and length of service (or tenure) modeling, and present a novel segment-based approach, motivated by the segment-level view marketing analysts usually employ. We then describe how we build on this approach to estimate the effects of retention on Lifetime Value. Our solution has been successfully implemented in Amdocs' Business Insight (BI) platform, and we illustrate its usefulness in real-world scenarios.",
"title": ""
}
] |
scidocsrr
|
727e761a8874835f0c0cd134cac3c058
|
Domain Adaptation for Visual Applications: A Comprehensive Survey
|
[
{
"docid": "adb64a513ab5ddd1455d93fc4b9337e6",
"text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.",
"title": ""
},
{
"docid": "ab01efad4c65bbed9e4a499844683326",
"text": "To achieve good generalization in supervised learning, the training and testing examples are usually required to be drawn from the same source distribution. In this paper we propose a method to relax this requirement in the context of logistic regression. Assuming <i>D<sup>p</sup></i> and <i>D<sup>a</sup></i> are two sets of examples drawn from two mismatched distributions, where <i>D<sup>a</sup></i> are fully labeled and <i>D<sup>p</sup></i> partially labeled, our objective is to complete the labels of <i>D<sup>p</sup>.</i> We introduce an auxiliary variable μ for each example in <i>D<sup>a</sup></i> to reflect its mismatch with <i>D<sup>p</sup>.</i> Under an appropriate constraint the μ's are estimated as a byproduct, along with the classifier. We also present an active learning approach for selecting the labeled examples in <i>D<sup>p</sup>.</i> The proposed algorithm, called \"Migratory-Logit\" or M-Logit, is demonstrated successfully on simulated as well as real data sets.",
"title": ""
}
] |
[
{
"docid": "1d50c8598a41ed7953e569116f59ae41",
"text": "Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of run time environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanism, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer.",
"title": ""
},
{
"docid": "968c116ed298a1f0b9592ab0971fe562",
"text": "According to the DSM-IV (American Psychiatric Association, 1995), simple phobias consist of persistent fear of a circumscribed stimulus and consequent avoidance of that stimulus, where the person having this fear knows it is excessive or unreasonable. If the feared stimulus is heights, the person is said to suffer from acrophobia, or fear of heights. The most common and most successful treatment for acrophobia is graded exposure in-vivo. Here, the avoidance behavior is broken by exposing the patient to a hierarchy of feared stimuli, whereby the fear will first increase, after which habituation will occur and the fear will gradually diminish (Bouman, Scholing & Emmelkamp, 1992). In in-vivo treatment the patient is exposed to real stimuli. A promising alternative is graded exposure in Virtual Reality, where the patient can be treated in the safety and privacy of the therapist’s office and situations can be recreated which are hard to find or costly to reach. At this moment, research is being conducted at the Delft University of Technology and the University of Amsterdam aimed at developing a virtual reality system to be used by therapists for such VR Exposure Therapy (VRET). This article describes the results of a pilot study undertaken to explore the possibilities and characteristics of VRET and determine requirements for a system to support therapists",
"title": ""
},
{
"docid": "5f1c8902ac412d6e086145ff11601bd1",
"text": "Many school districts have developed successful intervention programs to help students graduate high school on time. However, identifying and prioritizing students who need those interventions the most remains challenging. This paper describes a machine learning framework to identify such students, discusses features that are useful for this task, applies several classification algorithms, and evaluates them using metrics important to school administrators. To help test this framework and make it practically useful, we partnered with two U.S. school districts with a combined enrollment of approximately 200,000 students. We together designed several evaluation metrics to assess the goodness of machine learning algorithms from an educator's perspective. This paper focuses on students at risk of not finishing high school on time, but our framework lays a strong foundation for future work on other adverse academic outcomes.",
"title": ""
},
{
"docid": "2bda1b1482ca7b74078b10654576b24d",
"text": "A pattern recognition pipeline consists of three stages: data pre-processing, feature extraction, and classification. Traditionally, most research effort is put into extracting appropriate features. With the advent of GPU-accelerated computing and Deep Learning, appropriate features can be discovered as part of the training process. Understanding these discovered features is important: we might be able to learn something new about the domain in which our model operates, or be comforted by the fact that the model extracts “sensible” features. This work discusses and applies methods of visualizing the features learned by Convolutional Neural Networks (CNNs). Our main contribution is an extension of an existing visualization method. The extension makes the method able to visualize the features in intermediate layers of a CNN. Most notably, we show that the features extracted in the deeper layers of a CNN trained to diagnose Diabetic Retinopathy are also the features used by human clinicians. Additionally, we published our visualization method in a software package.",
"title": ""
},
{
"docid": "f86b052520e3950a2b580323252dbfde",
"text": "In this paper, novel radial basis function-neural network (RBF-NN) models are presented for the efficient filling of the coupling matrix of the method of moments (MoM). Two RBF-NNs are trained to calculate the majority of elements in the coupling matrix. The rest of elements are calculated using the conventional MoM, hence the technique is referred to as neural network-method of moments (NN-MoM). The proposed NN-MoM is applied to the analysis of a number of microstrip patch antenna arrays. The results show that NN-MoM is both accurate and fast. The proposed technique is general and it is convenient to integrate with MoM planar solvers.",
"title": ""
},
{
"docid": "773c132b708a605039d59de52a3cf308",
"text": "BACKGROUND\nAirSeal is a novel class of valve-free insufflation system that enables a stable pneumoperitoneum with continuous smoke evacuation and carbon dioxide (CO₂) recirculation during laparoscopic surgery. Comparison data to standard CO₂ pressure pneumoperitoneum insufflators is scarce. The aim of this study is to evaluate the potential advantages of AirSeal compared to a standard CO₂ insufflator.\n\n\nMETHODS/DESIGN\nThis is a single center randomized controlled trial comparing elective laparoscopic cholecystectomy, colorectal surgery and hernia repair with AirSeal (group A) versus a standard CO₂ pressure insufflator (group S). Patients are randomized using a web-based central randomization and registration system. Primary outcome measures will be operative time and level of postoperative shoulder pain by using the visual analog score (VAS). Secondary outcomes include the evaluation of immunological values through blood tests, anesthesiological parameters, surgical side effects and length of hospital stay. Taking into account an expected dropout rate of 5%, the total number of patients is 182 (n = 91 per group). All tests will be two-sided with a confidence level of 95% (P <0.05).\n\n\nDISCUSSION\nThe duration of an operation is an important factor in reducing the patient's exposure to CO₂ pneumoperitoneum and its adverse consequences. This trial will help to evaluate if the announced advantages of AirSeal, such as clear sight of the operative site and an exceptionally stable working environment, will facilitate the course of selected procedures and influence operation time and patients clinical outcome.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01740011, registered 23 November 2012.",
"title": ""
},
{
"docid": "018d05daa52fb79c17519f29f31026d7",
"text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.",
"title": ""
},
{
"docid": "fb1d84d15fd4a531a3a81c254ad3cab0",
"text": "Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies on massive corpora only, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.",
"title": ""
},
{
"docid": "6973231128048ac2ca5bce0121bf6d95",
"text": "PURPOSE\nThe aim of this study is to analyse the grip force distribution for different prosthetic hand designs and the human hand fulfilling a functional task.\n\n\nMETHOD\nA cylindrical object is held with a power grasp and the contact forces are measured at 20 defined positions. The distributions of contact forces in standard electric prostheses, in a experimental prosthesis with an adaptive grasp, and in human hands as a reference are analysed and compared. Additionally, the joint torques are calculated and compared.\n\n\nRESULTS\nContact forces of up to 24.7 N are applied by the middle and distal phalanges of the index finger, middle finger, and thumb of standard prosthetic hands, whereas forces of up to 3.8 N are measured for human hands. The maximum contact forces measured in a prosthetic hand with an adaptive grasp are 4.7 N. The joint torques of human hands and the adaptive prosthesis are comparable.\n\n\nCONCLUSIONS\nThe analysis of grip force distribution is proposed as an additional parameter to rate the performance of different prosthetic hand designs.",
"title": ""
},
{
"docid": "6d5e80293931396556cf5fbe64e9c2d2",
"text": "Rotors of electrical high speed machines are subject to high stress, limiting the rated power of the machines. This paper describes the design process of a high-speed rotor of a Permanent Magnet Synchronous Machine (PMSM) for a rated power of 10kW at 100,000 rpm. Therefore, at the initial design the impact of the rotor radius to critical parameters is analyzed analytically. In particular, critical parameters are mechanical stress due to high centrifugal forces and natural bending frequencies. Furthermore, air friction losses, heating the rotor and the stator additionally, are no longer negligible compared to conventional machines and must be considered in the design process. These mechanical attributes are controversial to the electromagnetic design, increasing the effective magnetic air gap, for example. Thus, investigations are performed to achieve sufficient mechanical strength without a significant reduction of air gap flux density or causing thermal problems. After initial design by means of analytical estimations, an optimization of rotor geometry and materials is performed by means of the finite element method (FEM).",
"title": ""
},
{
"docid": "0dffcc343fb6b98a6724da7e39248b52",
"text": "Digital maps have become a part of our everyday lives as they are integrated into a wide range of map-based services like traffic estimation, navigation systems, and many more. These services still have a huge opportunity for enhancements with semantically richer maps to support a large class of new services. In this demo, we demonstrate the MAP++ crowd-sensing system for map semantics identification. Map++ leverages standard smart-phone sensors to automatically enrich digital maps with various road semantics such as bridges, crossroads, roundabouts, underpasses, among others. The goal of this demo is to showcase Map++ in action where it takes crowd-sensed motion traces and process them automatically in real-time to identify the map semantics. The demo also allows attendees to analyze the effect of the Map++ different parameters on system performance and road semantics.",
"title": ""
},
{
"docid": "cd338aee8e141212a1548431766df498",
"text": "In recent years, contextual models that exploit maps have been shown to be very effective for many recognition and localization tasks. In this paper we propose to exploit aerial images in order to enhance freely available world maps. Towards this goal, we make use of OpenStreetMap and formulate the problem as the one of inference in a Markov random field parameterized in terms of the location of the road-segment centerlines as well as their width. This parameterization enables very efficient inference and returns only topologically correct roads. In particular, we can segment all OSM roads in the whole world in a single day using a small cluster of 10 computers. Importantly, our approach generalizes very well, it can be trained using only 1.5 km2 aerial imagery and produce very accurate results in any location across the globe. We demonstrate the effectiveness of our approach outperforming the state-of-the-art in two new benchmarks that we collect. We then show how our enhanced maps are beneficial for semantic segmentation of ground images.",
"title": ""
},
{
"docid": "3334f8059348b577a88162e7ade2540a",
"text": "Side branch occlusion, which was one of the common complications in percutaneous coronary interventions, was closely associated with cardiac death and myocardial infarction. Clinical guidelines also support the importance of preservation of physiologic blood flow in SB during PCI to bifurcation lesions. In order to avoid side branch occlusion during stent implantation, we often performed the jailed wire technique, in which a conventional guide wire was inserted to the side branch before stent implantation to the main vessel. However, the jailed wire technique could not always prevent side branch occlusion. In this case report, we described a case of 72-year-old male suffering from angina pectoris. Coronary angiography revealed the diffuse calcified stenosis in the proximal and middle of left anterior descending coronary artery, and the large diagonal branch originated from the middle of the stenosis. To prevent side branch occlusion, we performed a novel side branch protection technique by using the Corsair microcatheter (Asahi Intecc, Nagoya, Japan). In this case report, we illustrated this \"Jailed Corsair technique\", and discussed the advantage compared to other side branch protection techniques such as the jailed balloon technique.",
"title": ""
},
{
"docid": "dc76c7e939d26a6a81a8eb891b5824b7",
"text": "While deeper and wider neural networks are actively pushing the performance limits of various computer vision and machine learning tasks, they often require large sets of labeled data for effective training and suffer from extremely high computational complexity. In this paper, we will develop a new framework for training deep neural networks on datasets with limited labeled samples using cross-network knowledge projection which is able to improve the network performance while reducing the overall computational complexity significantly. Specifically, a large pre-trained teacher network is used to observe samples from the training data. A projection matrix is learned to project this teacher-level knowledge and its visual representations from an intermediate layer of the teacher network to an intermediate layer of a thinner and faster student network to guide and regulate its training process. Both the intermediate layers from the teacher network and the injection layers from the student network are adaptively selected during training by evaluating a joint loss function in an iterative manner. This knowledge projection framework allows us to use crucial knowledge learned by large networks to guide the training of thinner student networks, avoiding over-fitting, achieving better network performance, and significantly reducing the complexity. Extensive experimental results on benchmark datasets have demonstrated that our proposed knowledge projection approach outperforms existing methods, improving accuracy by up to 4% while reducing network complexity by 4 to 10 times, which is very attractive for practical applications of deep neural networks.",
"title": ""
},
{
"docid": "e3eb4019846f9add4e464462e1065119",
"text": "The internet – specifically its graphic interface, the world wide web – has had a major impact on all levels of (information) societies throughout the world. Specifically for journalism as it is practiced online, we can now identify the effect that this has had on the profession and its culture(s). This article defines four particular types of online journalism and discusses them in terms of key characteristics of online publishing – hypertextuality, interactivity, multimediality – and considers the current and potential impacts that these online journalisms can have on the ways in which one can define journalism as it functions in elective democracies worldwide. It is argued that the application of particular online characteristics not only has consequences for the type of journalism produced on the web, but that these characteristics and online journalisms indeed connect to broader and more profound changes and redefinitions of professional journalism and its (news) culture as a whole.",
"title": ""
},
{
"docid": "0d9340dc849332af5854380fa460cfd5",
"text": "Many scientific datasets archive a large number of variables over time. These timeseries data streams typically track many variables over relatively long periods of time, and therefore are often both wide and deep. In this paper, we describe the Visual Query Language (VQL) [3], a technology for locating time series patterns in historical or real time data. The user interactively specifies a search pattern, VQL finds similar shapes, and returns a ranked list of matches. VQL supports both univariate and multivariate queries, and allows the user to interactively specify the the quality of the match, including temporal warping, amplitude warping, and temporal constraints between features.",
"title": ""
},
{
"docid": "9cb2f99aa1c745346999179132df3854",
"text": "As a complementary and alternative medicine in medical field, traditional Chinese medicine (TCM) has drawn great attention in the domestic field and overseas. In practice, TCM provides a quite distinct methodology to patient diagnosis and treatment compared to western medicine (WM). Syndrome (ZHENG or pattern) is differentiated by a set of symptoms and signs examined from an individual by four main diagnostic methods: inspection, auscultation and olfaction, interrogation, and palpation which reflects the pathological and physiological changes of disease occurrence and development. Patient classification is to divide patients into several classes based on different criteria. In this paper, from the machine learning perspective, a survey on patient classification issue will be summarized on three major aspects of TCM: sign classification, syndrome differentiation, and disease classification. With the consideration of different diagnostic data analyzed by different computational methods, we present the overview for four subfields of TCM diagnosis, respectively. For each subfield, we design a rectangular reference list with applications in the horizontal direction and machine learning algorithms in the longitudinal direction. According to the current development of objective TCM diagnosis for patient classification, a discussion of the research issues around machine learning techniques with applications to TCM diagnosis is given to facilitate the further research for TCM patient classification.",
"title": ""
},
{
"docid": "7a9ece9a4043722a6f45eb87793501c2",
"text": "Shared-nothing architecture has been widely used in distributed databases to achieve good scalability. While it offers superior performance for local transactions, the overhead of processing distributed transactions can degrade the system performance significantly. The key contributor to the degradation is the expensive two-phase commit (2PC) protocol used to ensure atomic commitment of distributed transactions. In this paper, we propose a transaction management scheme called LEAP to avoid the 2PC protocol within distributed transaction processing. Instead of processing a distributed transaction across multiple nodes, LEAP converts the distributed transaction into a local transaction. This benefits the processing locality and facilitates adaptive data repartitioning when there is a change in data access pattern. Based on LEAP, we develop an online transaction processing (OLTP) system, L-Store, and compare it with the state-of-the-art distributed in-memory OLTP system, H-Store, which relies on the 2PC protocol for distributed transaction processing, and H^L-Store, a H-Store that has been modified to make use of LEAP. Results of an extensive experimental evaluation show that our LEAP-based engines are superior over H-Store by a wide margin, especially for workloads that exhibit locality-based data accesses.",
"title": ""
},
{
"docid": "002acd845aa9776840dfe9e8755d7732",
"text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.",
"title": ""
},
{
"docid": "d24ca3024b5abc27f6eb2ad5698a320b",
"text": "Purpose. To study the fracture behavior of the major habit faces of paracetamol single crystals using microindentation techniques and to correlate this with crystal structure and molecular packing. Methods. Vicker's microindentation techniques were used to measure the hardness and crack lengths. The development of all the major radial cracks was analyzed using the Laugier relationship and fracture toughness values evaluated. Results. Paracetamol single crystals showed severe cracking and fracture around all Vicker's indentations with a limited zone of plastic deformation close to the indent. This is consistent with the material being a highly brittle solid that deforms principally by elastic deformation to fracture rather than by plastic flow. Fracture was associated predominantly with the (010) cleavage plane, but was also observed parallel to other lattice planes including (110), (210) and (100). The cleavage plane (010) had the lowest fracture toughness value, Kc = 0.041MPa m1/2, while the greatest value, Kc = 0.105MPa m1/2; was obtained for the (210) plane. Conclusions. Paracetamol crystals showed severe cracking and fracture because of the highly brittle nature of the material. The fracture behavior could be explained on the basis of the molecular packing arrangement and the calculated attachment energies across the fracture planes.",
"title": ""
}
] |
scidocsrr
|
25e6f231a5f03f65cbded25d59a4caf0
|
Set-based Similarity Search for Time Series
|
[
{
"docid": "c1a8e30586aad77395e429556545675c",
"text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.",
"title": ""
}
] |
[
{
"docid": "fe3570c283fbf8b1f504e7bf4c2703a8",
"text": "We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks.",
"title": ""
},
{
"docid": "b0ea2ca170a8d0bcf4bd5dc8311c6201",
"text": "A cascade of sigma-delta modulator stages that employ a feedforward architecture to reduce the signal ranges required at the integrator inputs and outputs has been used to implement a broadband, high-resolution oversampling CMOS analog-to-digital converter capable of operating from low-supply voltages. An experimental prototype of the proposed architecture has been integrated in a 0.25-/spl mu/m CMOS technology and operates from an analog supply of only 1.2 V. At a sampling rate of 40 MSamples/sec, it achieves a dynamic range of 96 dB for a 1.25-MHz signal bandwidth. The analog power dissipation is 44 mW.",
"title": ""
},
{
"docid": "ac27282865ee11b058f71eef41851ad4",
"text": "Oral herpes virus infections (OHVIs) are among the most common mucosal disorders encountered by oral health care providers. These infections can affect individuals at any age, from infants to the elderly, and may cause significant pain and dysfunction. Immunosuppressed patients may be at increased risk for serious and potential life-threatening complications caused by OHVIs. Clinicians may have difficulty in diagnosing these infections because they can mimic other conditions of the oral mucosa. This article provides oral health care providers with clinically relevant information regarding etiopathogenesis, diagnosis, and management of OHVIs.",
"title": ""
},
{
"docid": "cfc72641905ede8ad4e33e6c74354899",
"text": "Multinomial Naive Bayes with Expectation Maximization (MNB-EM) is a standard semi-supervised learning method to augment Multinomial Naive Bayes (MNB) for text classification. Despite its success, MNB-EM is not stable, and may succeed or fail to improve MNB. We believe that this is because MNB-EM lacks the ability to preserve the class distribution on words. In this paper, we propose a novel method to augment MNBEM by leveraging the word-level statistical constraint to preserve the class distribution on words. The word-level statistical constraints are further converted to constraints on document posteriors generated by MNB-EM. Experiments demonstrate that our method can consistently improve MNBEM, and outperforms state-of-art baselines remarkably.",
"title": ""
},
{
"docid": "6c23ecee9a0861ee0c7d3dd7a8c32614",
"text": "This paper presents a broadband frequency doubler chip working in the WR-03 band (220–325 GHz). The chip is implemented in a 130-nm SiGe BiCMOS technology with an of 250/300 GHz. It consists of an integrated high-gain wideband amplifier to drive the frequency doubler. The doubler is based on a cascode push-push topology. Conversion loss of the doubler is reduced by utilizing an inductive feedback in the common-base stage. A very wideband operation of the doubler is achieved using optimally sized transistors and 4-reactance based input matching network. On-wafer measurement of the chip shows a state-of-the-art 17.4 dB peak conversion gain at 270 GHz. It delivers a maximum output power of almost 1 mW with a 3-dB bandwidth ranging from 257 GHz to 307 GHz, which is the highest bandwidth for Si-based frequency doublers working entirely in the WR-03 band. The chip consumes around 429 mW from a supply voltage of 3.3 V.",
"title": ""
},
{
"docid": "676593ce8a3be454a276b23e4fce331b",
"text": "In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. This results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight and reveal several expected and unexpected aspects of the database. The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality.",
"title": ""
},
{
"docid": "881de4b66bdba0a45caaa48a13b33388",
"text": "This paper describes a Di e-Hellman based encryption scheme, DHAES. The scheme is as e cient as ElGamal encryption, but has stronger security properties. Furthermore, these security properties are proven to hold under appropriate assumptions on the underlying primitive. We show that DHAES has not only the \\basic\" property of secure encryption (namely privacy under a chosen-plaintext attack) but also achieves privacy under both non-adaptive and adaptive chosenciphertext attacks. (And hence it also achieves non-malleability.) DHAES is built in a generic way from lower-level primitives: a symmetric encryption scheme, a message authentication code, group operations in an arbitrary group, and a cryptographic hash function. In particular, the underlying group may be an elliptic-curve group or the multiplicative group of integers modulo a prime number. The proofs of security are based on appropriate assumptions about the hardness of the Di e-Hellman problem and the assumption that the underlying symmetric primitives are secure. The assumptions are all standard in the sense that no random oracles are involved. We suggest that DHAES provides an attractive starting point for developing public-key encryption standards based on the Di e-Hellman assumption.",
"title": ""
},
{
"docid": "0e232a1478ede33de356f9fcfb9a554e",
"text": "Edge computing is promoted to meet increasing performance needs of data-driven services using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm one has to consider how to combine the efficiency of resource usage at all three layers of architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, an efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of 4 perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis is used to identify some gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as a resource, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and collaboration schemes requiring incentives are expected to be different in edge architectures compared to the classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.",
"title": ""
},
{
"docid": "38d9a18ba942e401c3d0638f88bc948c",
"text": "The question whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed priority systems, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.",
"title": ""
},
{
"docid": "473d8cbcd597c961819c5be6ab2e658e",
"text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.",
"title": ""
},
{
"docid": "f8cc1cf257711c83464a98b3d9167c94",
"text": "A Software Repository is a collection of library files and function codes. Programmers and Engineers design develop and build software libraries in a continuous process. Selecting suitable function code from one among many in the repository is quite challenging and cumbersome as we need to analyze semantic issues in function codes or components. Clustering and Mining Software Components for efficient reuse is the current topic of interest among researchers in Software Reuse Engineering and Information Retrieval. A relatively less research work is contributed in this field and has a good scope in the future. In this paper, the main idea is to cluster the software components and form a subset of libraries from the available repository. These clusters thus help in choosing the required component with high cohesion and low coupling quickly and efficiently. We define a similarity function and use the same for the process of clustering the software components and for estimating the cost of new project. The approach carried out is a feature vector based approach. © 2014 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the organizers of ITQM 2014",
"title": ""
},
{
"docid": "e32f77e31a452ae6866652ce69c5faaa",
"text": "The efficient detection of interesting features is a crucial step for various tasks in Computer Vision. Corners are favored cues due to their two dimensional constraint and fast algorithms to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so unlike FAST does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.",
"title": ""
},
{
"docid": "1796abfceaa17dad2e0d4150a8c8a8f3",
"text": "A novel eight-band LTE/WWAN frequency reconfigurable antenna for tablet computer applications is proposed in this communication. With a small dimension of 40 × 12 × 4 mm3, the proposed antenna comprises a loop feeding strip and a shorting strip in which a single-pole four-throw RF switch is embedded. The RF switch is used to change the resonant modes of lower band among four different working states, so that the antenna can provide a multiband operation of LTE700/GSM850 /900/1800/1900/UMTS2100/LTE2300/2500 with return loss better than 6 dB. Reasonably good radiating efficiency and antenna gain are also achieved for the practical tablet computer.",
"title": ""
},
{
"docid": "bfdfd911e913c4dbe7a01e775ae6f5bf",
"text": "With the upgrowing of digital processing of images and film archiving, the need for assisted or unsupervised restoration required the development of a series of methods and techniques. Among them, image inpainting is maybe the most impressive and useful. Based on partial derivative equations or texture synthesis, many other hybrid techniques have been proposed recently. The need for an analytical comparison, beside the visual one, urged us to perform the studies shown in the present paper. Starting with an overview of the domain, an evaluation of the five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms have been presented, categorizing them in function of the restored image structure. Based on these experiments, we have proposed an adaptation of Oliveira's and Hadhoud's algorithms, which are performing well on images with natural defects.",
"title": ""
},
{
"docid": "102847880600a607db58acb53fa22f0b",
"text": "PURPOSE\nElectronic communications technologies are affording children and adolescents new means of bullying one another. Referred to as electronic bullying, cyberbullying, or online social cruelty, this phenomenon includes bullying through e-mail, instant messaging, in a chat room, on a website, or through digital messages or images sent to a cell phone. The present study examined the prevalence of electronic bullying among middle school students.\n\n\nMETHODS\nA total of 3,767 middle school students in grades 6, 7, and 8 who attend six elementary and middle schools in the southeastern and northwestern United States completed a questionnaire, consisting of the Olweus Bully/Victim Questionnaire and 23 questions developed for this study that examined participants' experiences with electronic bullying, as both victims and perpetrators.\n\n\nRESULTS\nOf the students, 11% that they had been electronically bullied at least once in the last couple of months (victims only); 7% indicated that they were bully/victims; and 4% had electronically bullied someone else at least once in the previous couple of months (bullies only). The most common methods for electronic bullying (as reported by both victims and perpetrators) involved the use of instant messaging, chat rooms, and e-mail. Importantly, close to half of the electronic bully victims reported not knowing the perpetrator's identity.\n\n\nCONCLUSIONS\nElectronic bullying represents a problem of significant magnitude. As children's use of electronic communications technologies is unlikely to wane in coming years, continued attention to electronic bullying is critical. Implications of these findings for youth, parents, and educators are discussed.",
"title": ""
},
{
"docid": "f6e90401ea52689801b164ef8167814c",
"text": "In this paper, we develop novel, efficient 2D encodings for 3D geometry, which enable reconstructing full 3D shapes from a single image at high resolution. The key idea is to pose 3D shape reconstruction as a 2D prediction problem. To that end, we first develop a simple baseline network that predicts entire voxel tubes at each pixel of a reference view. By leveraging well-proven architectures for 2D pixel-prediction tasks, we attain state-of-the-art results, clearly outperforming purely voxel-based approaches. We scale this baseline to higher resolutions by proposing a memory-efficient shape encoding, which recursively decomposes a 3D shape into nested shape layers, similar to the pieces of a Matryoshka doll. This allows reconstructing highly detailed shapes with complex topology, as demonstrated in extensive experiments; we clearly outperform previous octree-based approaches despite having a much simpler architecture using standard network components. Our Matryoshka networks further enable reconstructing shapes from IDs or shape similarity, as well as shape sampling.",
"title": ""
},
{
"docid": "30e0918ec670bdab298f4f5bb59c3612",
"text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. This may provide opportunities for new scheduling algorithms and to reduce average read times.",
"title": ""
},
{
"docid": "ca905aef2477905783f7d18be841f99b",
"text": "PURPOSE\nHumans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit.\n\n\nMETHODS\nIn experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field.\n\n\nRESULTS\nPursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.",
"title": ""
},
{
"docid": "c04a00cfe6b401087e675fbaae76bfff",
"text": "Text analytics and sentiment analysis can help an organization derive potentially valuable business insights from text-based content such as word documents, email and postings on social media streams like Facebook, Twitter and LinkedIn. The system described here analyses opinions about various gadgets collected from two different sources and in two different forms; online reviews and Twitter posts (tweets). Sentiment analysis can be applied to online reviews in easier and more detailed way than to the tweets. Namely, online reviews are written in clear and grammatically more accurate form, while in tweets, internet slang, sarcasm and allegory are often used. System described here explains methods of data collection, sentiment analysis process for online reviews and tweets using KNIME, gives an overview of differences and analysis possibilities in sentiment analysis for both data sources.",
"title": ""
},
{
"docid": "1d58926e54db9412921284ad2c18a324",
"text": "Automatic detection of tumors in medical images is motivated by the necessity of high accuracy when dealing with a human life. Also, the computer assistance is demanded in medical institutions due to the fact that it could improve the results of humans in such a domain where the false negative cases must be at a very low rate. Processing of Magnetic Resonance Imaging (MRI) images is one of the techniques to diagnose the brain tumor. This paper describes the strategy to detect and extract brain tumor from patient’s MRI scan images. First it takes the name and age of a person and then MRI brain image is used for tumor detection process. It includes pre-processing, segmentation, morphological operation, watershed segmentation and calculation of the tumor area and determination of the tumor location. The user interface of the application is developed using Graphical User Interface (GUI) developing environment of Matrix Laboratory (MATLAB).",
"title": ""
}
] |
scidocsrr
|
52d09d2ef097bcd59715fe6d26aed637
|
Nonlocal Image Restoration With Bilateral Variance Estimation: A Low-Rank Approach
|
[
{
"docid": "4720a84220e37eca1d0c75697f247b23",
"text": "We describe a form of nonlinear decomposition that is well-suited for efficient encoding of natural signals. Signals are initially decomposed using a bank of linear filters. Each filter response is then rectified and divided by a weighted sum of rectified responses of neighboring filters. We show that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively. These results suggest that nonlinear response properties of sensory neurons are not an accident of biological implementation, but have an important functional role.",
"title": ""
}
] |
[
{
"docid": "21b04c71f6c87b18f544f6b3f6570dd7",
"text": "Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience in developing expert systems.<<ETX>>",
"title": ""
},
{
"docid": "6bbcbe9f4f4ede20d2b86f6da9167110",
"text": "Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.",
"title": ""
},
{
"docid": "cbded803de971279c145e509d38f195f",
"text": "This study deals with a normative concept of participatory development approach, which originates in the developed world. In particular, it analyses and explains the limitations to the participatory tourism development approach in the context of developing countries. It was found that there are operational, structural and cultural limits to community participation in the TDP in many developing countries although they do not equally exist in every tourist destination. Moreover, while these limits tend to exhibit higher intensity and greater persistence in the developing world than in the developed world, they appear to be a re#ection of prevailing socio-political, economic and cultural structure in many developing countries. On the other hand, it was also found that although these limitations may vary over time according to types, scale and levels of tourism development, the market served, and cultural attributes of local communities, forms and scale of tourism developed are beyond the control of local communities. It concludes that formulating and implementing the participatory tourism development approach requires a total change in sociopolitical, legal, administrative and economic structure of many developing countries, for which hard political choices and logical decisions based on cumbersome social, economic and environmental trade-o!s are sine qua non alongside deliberate help, collaboration and co-operation of major international donor agencies, NGOs, international tour operators and multinational companies. ( 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "09085472d12ed72d5c0fe27b5eb5e175",
"text": "BACKGROUND\nUse of exergames can complement conventional therapy and increase the amount and intensity of visuospatial neglect (VSN) training. A series of 9 exergames-games based on therapeutic principles-aimed at improving exploration of the neglected space for patients with VSN symptoms poststroke was developed and tested for its feasibility.\n\n\nOBJECTIVES\nThe goal was to determine the feasibility of the exergames with minimal supervision in terms of (1) implementation of the intervention, including adherence, attrition and safety, and (2) limited efficacy testing, aiming to document possible effects on VSN symptoms in a case series of patients early poststroke.\n\n\nMETHODS\nA total of 7 patients attended the 3-week exergames training program on a daily basis. Adherence of the patients was documented in a training diary. For attrition, the number of participants lost during the intervention was registered. Any adverse events related to the exergames intervention were noted to document safety. Changes in cognitive and spatial exploration skills were measured with the Zürich Maxi Mental Status Inventory and the Neglect Test. Additionally, we developed an Eye Tracker Neglect Test (ETNT) using an infrared camera to detect and measure neglect symptoms pre- and postintervention.\n\n\nRESULTS\nThe median was 14 out of 15 (93%) attended sessions, indicating that the adherence to the exergames training sessions was high. There were no adverse events and no drop-outs during the exergame intervention. The individual cognitive and spatial exploration skills slightly improved postintervention (P=.06 to P=.98) and continued improving at follow-up (P=.04 to P=.92) in 5 out of 7 (71%) patients. Calibration of the ETNT was rather error prone. The ETNT showed a trend for a slight median group improvement from 15 to 16 total located targets (+6%).\n\n\nCONCLUSIONS\nThe high adherence rate and absence of adverse events showed that these exergames were feasible and safe for the participants. The results of the amount of exergames use is promising for future applications and warrants further investigations-for example, in the home setting of patients to augment training frequency and intensity. The preliminary results indicate the potential of these exergames to cause improvements in cognitive and spatial exploration skills over the course of training for stroke patients with VSN symptoms. Thus, these exergames are proposed as a motivating training tool to complement usual care. The ETNT showed to be a promising assessment for quantifying spatial exploration skills. However, further adaptations are needed, especially regarding calibration issues, before its use can be justified in a larger study sample.",
"title": ""
},
{
"docid": "d7cf8f20a9c061ef3bc661188efa6440",
"text": "Automatic classification of colon into normal and malignant classes is complex due to numerous factors including similar colors in different biological constituents of histopathological imagery. Therefore, such techniques, which exploit the textural and geometric properties of constituents of colon tissues, are desired. In this paper, a novel feature extraction strategy that mathematically models the geometric characteristics of constituents of colon tissues is proposed. In this study, we also show that the hybrid feature space encompassing diverse knowledge about the tissues׳ characteristics is quite promising for classification of colon biopsy images. This paper thus presents a hybrid feature space based colon classification (HFS-CC) technique, which utilizes hybrid features for differentiating normal and malignant colon samples. The hybrid feature space is formed to provide the classifier different types of discriminative features such as features having rich information about geometric structure and image texture. Along with the proposed geometric features, a few conventional features such as morphological, texture, scale invariant feature transform (SIFT), and elliptic Fourier descriptors (EFDs) are also used to develop a hybrid feature set. The SIFT features are reduced using minimum redundancy and maximum relevancy (mRMR). Various kernels of support vector machines (SVM) are employed as classifiers, and their performance is analyzed on 174 colon biopsy images. The proposed geometric features have achieved an accuracy of 92.62%, thereby showing their effectiveness. Moreover, the proposed HFS-CC technique achieves 98.07% testing and 99.18% training accuracy. The better performance of HFS-CC is largely due to the discerning ability of the proposed geometric features and the developed hybrid feature space.",
"title": ""
},
{
"docid": "732433b4cc1d9a3fcf10339e53eb3ab8",
"text": "Humans and mammals possess their own feet. Using the mobility of their feet, they are able to walk in various environments such as plain land, desert, swamp, and so on. Previously developed biped robots and four-legged robots did not employ such adaptable foot. In this work, a biomimetic foot mechanism is investigated through analysis of the foot structure of the human-being. This foot mechanism consists of a toe, an ankle, a heel, and springs replacing the foot muscles and tendons. Using five toes and springs, this foot can adapt to various environments. A mathematical modeling for this foot mechanism was performed and its characteristics were observed through numerical simulation.",
"title": ""
},
{
"docid": "dcee61dad66f59b2450a3e154726d6b1",
"text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.",
"title": ""
},
{
"docid": "37b3447959579cf5cf5e617417e3b575",
"text": "BACKGROUND\nPosttraumatic osteoarthritis (PTOA) after anterior cruciate ligament (ACL) reconstruction ultimately translates into a large economic effect on the health care system owing to the young ages of this population. Purpose/Hypothesis: The purposes were to perform a meta-analysis to determine the prevalence of osteoarthritis after an ACL reconstruction, examining the effects of length of time after surgery, preoperative time interval from injury to surgery, and patient age at the time of surgery. It was hypothesized that the prevalence of PTOA increased with time from surgery and that increased time from injury to surgery and age were also risk factors for the development of PTOA.\n\n\nSTUDY DESIGN\nMeta-analysis.\n\n\nMETHODS\nA meta-analysis of the prevalence of radiographic PTOA after ACL reconstruction was performed of studies with a minimum of 5 years' follow-up, with a level of evidence of 1, 2, or 3. The presence of osteoarthritis was defined according to knee radiographs evaluated with classification based on Kellgren and Lawrence, Ahlbäck, International Knee Documentation Committee, or the Osteoarthritis Research Society International. Metaregression models quantified the relationship between radiographic PTOA prevalence and the mean time from injury to surgery, mean patient age at time of surgery, and mean postoperative follow-up time.\n\n\nRESULTS\nThirty-eight studies (4108 patients) were included. Longer postsurgical follow-up time was significantly positively associated with a higher proportion of PTOA development. The model-estimated proportion of PTOA (95% CI) at 5, 10, and 20 years after surgery was 11.3% (6.4%-19.1%), 20.6% (14.9%-27.7%), and 51.6% (29.1%-73.5%), respectively. Increased chronicity of the ACL tear before surgery and increased patient age were also associated with a higher likelihood of PTOA development.\n\n\nCONCLUSION\nThe prevalence of osteoarthritis after an ACL reconstruction significantly increased with time. Longer chronicity of ACL tear and older age at the time of surgery were significantly positively correlated with the development of osteoarthritis. A timely referral and treatment of symptomatic patients are vital to diminish the occurrence of PTOA.",
"title": ""
},
{
"docid": "3130e666076d119983ac77c5d77d0aed",
"text": "of Ph.D. dissertation, University of Haifa, Israel.",
"title": ""
},
{
"docid": "a1389b49a11508c33d462d28b1b3d93e",
"text": "Glaucoma is one of the most common causes of blindness in the world. The vision lost due to glaucoma cannot be regained. Early detection of glaucoma is thus very important. The Optic Disk(OD), Optic Cup(OC) and Neuroretinal Rim(NRR) are among the important features of a retinal image that can be used in the detection of glaucoma. In this paper, a computer-assisted method for the detection of glaucoma based on the ISNT rule is presented. The OD and OC are segmented using watershed transformation. The NRR area in the ISNT quadrants is obtained from the segmented OD and OC. The method is applied on the publicly available databases HRF, Messidor, DRIONS-DB, RIM-ONE and a local hospital database consisting of both normal and glaucomatous images. The proposed method is simple, computationally efficient and achieves a sensitivity of 91.82% and an overall accuracy of 94.14%.",
"title": ""
},
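As a complement to the passage above, here is a minimal sketch of how the ISNT rule could be checked once the neuroretinal rim area has been measured in the four quadrants. The watershed segmentation of the optic disc and cup is not shown, and the quadrant areas below are hypothetical inputs.

```python
# Hedged sketch: ISNT-rule check on neuroretinal rim (NRR) areas.
# Quadrant areas are assumed to come from prior OD/OC segmentation (e.g., watershed);
# the numeric values below are illustrative only.
def follows_isnt_rule(inferior, superior, nasal, temporal):
    """Return True if the rim-area ordering I >= S >= N >= T holds (healthy pattern)."""
    return inferior >= superior >= nasal >= temporal

def screen_eye(nrr_areas):
    """Flag a possible glaucoma suspect when the ISNT ordering is violated."""
    ok = follows_isnt_rule(*nrr_areas)
    return "normal pattern" if ok else "suspect: ISNT rule violated"

print(screen_eye((1.42, 1.31, 1.05, 0.88)))   # respects I >= S >= N >= T
print(screen_eye((0.95, 1.20, 1.10, 0.90)))   # inferior rim thinned -> flagged
```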
{
"docid": "1c3ec7daefc72c676bcff1ad136b132c",
"text": "Based on research investigating English first-year university students, this paper examined the case made for a new generation of young learners often described as the Net Generation or Digital Natives in terms of agency and choice. Generational arguments set out a case that links young people’s attitudes and orientations to their lifelong exposure to networked and digital technologies. This paper drew on interview data from mixed methods research to suggest that the picture is more complex than the equation of exposure to new technologies and a generational change of attitudes and capacities. Starting from the position that interaction with technology is mediated by activity and an intentional stance, we examined the choices students make with regard to the technologies they engage with. We explored the perceived constraints students face and the way they either comply or resist such constraints. We concluded that agency actively shapes student engagement with technology but that an adequate conception of agency must expand beyond the person and the self to include notions of collective agency identifying the meso level as an activity system that mediates between the students and their technological setting.",
"title": ""
},
{
"docid": "7f6738aeccf7bc0e490d62e3030fdaf3",
"text": "Customer churn prediction is becoming an increasingly important business analytics problem for telecom operators. In order to increase the efficiency of customer retention campaigns, churn prediction models need to be accurate as well as compact and interpretable. Although a myriad of techniques for churn prediction has been examined, there has been little attention for the use of Bayesian Network classifiers. This paper investigates the predictive power of a number of Bayesian Network algorithms, ranging from the Naive Bayes classifier to General Bayesian Network classifiers. Furthermore, a feature selection method based on the concept of the Markov Blanket, which is genuinely related to Bayesian Networks, is tested. The performance of the classifiers is evaluated with both the Area under the Receiver Operating Characteristic Curve and the recently introduced Maximum Profit criterion. The Maximum Profit criterion performs an intelligent optimization by targeting this fraction of the customer base which would maximize the profit generated by a retention campaign. The results of the experiments are rigorously tested and indicate that most of the analyzed techniques have a comparable performance. Some methods, however, are more preferred since they lead to compact networks, which enhances the interpretability and comprehensibility of the churn prediction models.",
"title": ""
},
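The passage above compares Bayesian network classifiers for churn prediction under AUC and a profit-based criterion. Below is a hedged sketch of the simplest member of that family, a Naive Bayes classifier scored with AUC; the real dataset, the Markov-blanket feature selection, and the exact Maximum Profit computation are not reproduced, and the targeted fraction at the end only illustrates the intuition behind profit-driven evaluation.

```python
# Hedged sketch: Naive Bayes churn model scored with AUC on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))                      # usage / billing features (synthetic)
logit = 1.2 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=5000)
y = (logit > 0.8).astype(int)                       # 1 = churner

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, scores), 3))

# A retention campaign would target only the top-scored fraction of customers,
# which is the intuition behind the Maximum Profit criterion mentioned above.
top = np.argsort(scores)[::-1][: int(0.1 * len(scores))]
print("churn rate in targeted top 10%:", round(y_te[top].mean(), 3))
```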
{
"docid": "18e1f1171844fa27905246b9246cc975",
"text": "Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topoIogica1. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. @ 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "1e6c497fe53f8cba76bd8b432c618c1f",
"text": "inputs into digital (down or up), analog (-1.0 to 1.0), and positional (touch and • mouse cursor). By building on a solid main loop you can easily add support for detecting chorded inputs and sequence inputs.",
"title": ""
},
{
"docid": "60a7e9be448a0ac4e25d1eed5b075de9",
"text": "Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8% respectively.",
"title": ""
},
{
"docid": "542683765586010b828af95c7a109fdc",
"text": "This paper suggests asymmetric stator teeth design to reduce torque ripple and back EMF Total Harmonic Distortion(THD) for Interior Permanent Magnet Synchronous Machine(IPMSM). IPMSM which has 8 poles, 12 slots is analyzed in this study. From changing design parameter in stator structure, 8 comparison models are analyzed. Analysis of proposed method is carried out using Finite Element Method(FEM). Suggested method has advantage to reduce torque ripple and back electromotive force(EMF) harmonics without average torque decrease. Comparison between reference model and comparison models applying proposed method proceeds to verify advantage of this method.",
"title": ""
},
{
"docid": "fa7da02d554957f92364d4b37219feba",
"text": "This paper shows mechanisms for artificial finger based on a planetary gear system (PGS). Using the PGS as a transmitter provides an under-actuated system for driving three joints of a finger with back-drivability that is crucial characteristics for fingers as an end-effector when it interacts with external environment. This paper also shows the artificial finger employed with the originally developed mechanism called “double planetary gear system” (DPGS). The DPGS provides not only back-drivable and under-actuated flexion-extension of the three joints of a finger, which is identical to the former, but also adduction-abduction of the MP joint. Both of the above finger mechanisms are inherently safe due to being back-drivable with no electric device or sensor in the finger part. They are also rigorously solvable in kinematics and kinetics as shown in this paper.",
"title": ""
},
{
"docid": "7ca668dbbb6cc08f3eac484e8a2dae31",
"text": "At present, the prime methodology for studying neuronal circuit-connectivity, physiology and pathology under in vitro or in vivo conditions is by using substrate-integrated microelectrode arrays. Although this methodology permits simultaneous, cell-non-invasive, long-term recordings of extracellular field potentials generated by action potentials, it is 'blind' to subthreshold synaptic potentials generated by single cells. On the other hand, intracellular recordings of the full electrophysiological repertoire (subthreshold synaptic potentials, membrane oscillations and action potentials) are, at present, obtained only by sharp or patch microelectrodes. These, however, are limited to single cells at a time and for short durations. Recently a number of laboratories began to merge the advantages of extracellular microelectrode arrays and intracellular microelectrodes. This Review describes the novel approaches, identifying their strengths and limitations from the point of view of the end users--with the intention to help steer the bioengineering efforts towards the needs of brain-circuit research.",
"title": ""
},
{
"docid": "ebbe0463e44c365f10e8740f0646e338",
"text": "Mammalian visual behaviors, as well as responses in the neural systems underlying these behaviors, are driven by luminance and color contrast. With constantly improving tools for measuring activity in cell-type-specific populations in the mouse during visual behavior, it is important to define the extent of luminance and color information that is behaviorally accessible to the mouse. A non-uniform distribution of cone opsins in the mouse retina potentially complicates both luminance and color sensitivity; opposing gradients of short (UV-shifted) and middle (blue/green) cone opsins suggest that color discrimination and wavelength-specific luminance contrast sensitivity may differ with retinotopic location. Here we ask how well mice can discriminate color and wavelength-specific luminance changes across visuotopic space. We found that mice were able to discriminate color and were able to do so more broadly across visuotopic space than expected from the cone-opsin distribution. We also found wavelength-band-specific differences in luminance sensitivity.",
"title": ""
},
{
"docid": "4affe8335240844414a51355593bfbe0",
"text": "— This paper reviews and extends some recent results on the multivariate fractional Brownian motion (mfBm) and its increment process. A characterization of the mfBm through its covariance function is obtained. Similarly, the correlation and spectral analyses of the increments are investigated. On the other hand we show that (almost) all mfBm’s may be reached as the limit of partial sums of (super)linear processes. Finally, an algorithm to perfectly simulate the mfBm is presented and illustrated by some simulations. Résumé (Propriétés du mouvement brownien fractionnaire multivarié) Cet article constitue une synthèse des propriétés du mouvement brownien fractionnaire multivarié (mBfm) et de ses accroissements. Différentes caractérisations du mBfm sont présentées à partir soit de la fonction de covariance, soit de représentations intégrales. Nous étudions aussi les propriétés temporelles et spectrales du processus des accroissements. D’autre part, nous montrons que (presque) tous les mBfm peuvent être atteints comme la limite (au sens de la convergence faible) des sommes partielles de processus (super)linéaires. Enfin, un algorithme de simulation exacte est présenté et quelques simulations illustrent les propriétés du mBfm.",
"title": ""
}
] |
scidocsrr
|
2e9f4e0a2346ce8a3dfdd07b2eee5cc2
|
EncFS goes multi-user: Adding access control to an encrypted file system
|
[
{
"docid": "21d84bd9ea7896892a3e69a707b03a6a",
"text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.",
"title": ""
}
] |
[
{
"docid": "9cb7f19b08aefb98a412a2737b73707a",
"text": "Usually device compact models do not include breakdown mechanisms which are fundamental for ESD protection devices. This work proposes a novel spice-compatible modeling of breakdown phenomena for ESD diodes. The developed physics based approach includes minority carriers propagation and can be embedded in the simulation of parasitic substrate noise of power devices. The model implemented in VerilogA has been validated with device simulations for a simple structure at different temperatures showing good agreement and robust convergence.",
"title": ""
},
{
"docid": "da6a74341c8b12658aea2a267b7a0389",
"text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fastor slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slowpacelno-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from triai if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1%2, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, WllUamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al, (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls Che suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or \"harmless error.\" Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypno-",
"title": ""
},
{
"docid": "b670c8908aa2c8281b3164d7726b35d0",
"text": "We present a sketching interface for quickly and easily designing freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making wide areas fat, and narrow areas thin. Teddy, our prototype system, is implemented as a Java#8482; program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user typically masters the operations within 10 minutes, and can construct interesting 3D models within minutes.",
"title": ""
},
{
"docid": "1bb54da28e139390c2176ae244066575",
"text": "A novel non-parametric, multi-variate quickest detection method is proposed for cognitive radios (CRs) using both energy and cyclostationary features. The proposed approach can be used to track state dynamics of communication channels. This capability can be useful for both dynamic spectrum sharing (DSS) and future CRs, as in practice, centralized channel synchronization is unrealistic and the prior information of the statistics of channel usage is, in general, hard to obtain. The proposed multi-variate non-parametric average sample power and cyclostationarity-based quickest detection scheme is shown to achieve better performance compared to traditional energy-based schemes. We also develop a parallel on-line quickest detection/off-line change-point detection algorithm to achieve self-awareness of detection delays and false alarms for future automation. Compared to traditional energy-based quickest detection schemes, the proposed multi-variate non-parametric quickest detection scheme has comparable computational complexity. The simulated performance shows improvements in terms of small detection delays and significantly higher percentage of spectrum utilization.",
"title": ""
},
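A minimal sketch related to the passage above: a CUSUM-style statistic over average sample power that signals when a channel becomes occupied. This is a simplified, single-feature, parametric stand-in for the multivariate non-parametric scheme described, with a synthetic signal and arbitrarily chosen threshold.

```python
# Hedged sketch: CUSUM-style quickest detection on average sample power.
# Simplified stand-in for the multivariate non-parametric scheme in the passage;
# the data and threshold below are synthetic/illustrative.
import numpy as np

rng = np.random.default_rng(2)
noise_power, signal_power, change_at = 1.0, 2.0, 300
samples = np.concatenate([
    rng.exponential(noise_power, change_at),        # channel idle
    rng.exponential(signal_power, 200),             # primary user appears
])

drift = 0.5 * (signal_power - noise_power)          # reference value between the two regimes
threshold = 8.0                                     # trade-off: detection delay vs. false alarms
s, alarm_at = 0.0, None
for n, x in enumerate(samples):
    s = max(0.0, s + (x - noise_power) - drift)     # one-sided CUSUM recursion
    if s > threshold:
        alarm_at = n
        break

print("true change at sample", change_at, "- alarm raised at sample", alarm_at)
```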
{
"docid": "ee8b20f685d4c025e1d113a676728359",
"text": "Two experiments were conducted to evaluate the effects of increasing concentrations of glycerol in concentrate diets on total tract digestibility, methane (CH4) emissions, growth, fatty acid profiles, and carcass traits of lambs. In both experiments, the control diet contained 57% barley grain, 14.5% wheat dried distillers grain with solubles (WDDGS), 13% sunflower hulls, 6.5% beet pulp, 6.3% alfalfa, and 3% mineral-vitamin mix. Increasing concentrations (7, 14, and 21% dietary DM) of glycerol in the dietary DM were replaced for barley grain. As glycerol was added, alfalfa meal and WDDGS were increased to maintain similar concentrations of CP and NDF among diets. In Exp.1, nutrient digestibility and CH4 emissions from 12 ram lambs were measured in a replicated 4 × 4 Latin square experiment. In Exp. 2, lamb performance was evaluated in 60 weaned lambs that were blocked by BW and randomly assigned to 1 of the 4 dietary treatments and fed to slaughter weight. In Exp. 1, nutrient digestibility and CH4 emissions were not altered (P = 0.15) by inclusion of glycerol in the diets. In Exp.2, increasing glycerol in the diet linearly decreased DMI (P < 0.01) and tended (P = 0.06) to reduce ADG, resulting in a linearly decreased final BW. Feed efficiency was not affected by glycerol inclusion in the diets. Carcass traits and total SFA or total MUFA proportions of subcutaneous fat were not affected (P = 0.77) by inclusion of glycerol, but PUFA were linearly decreased (P < 0.01). Proportions of 16:0, 10t-18:1, linoleic acid (18:2 n-6) and the n-6/n-3 ratio were linearly reduced (P < 0.01) and those of 18:0 (stearic acid), 9c-18:1 (oleic acid), linearly increased (P < 0.01) by glycerol. When included up to 21% of diet DM, glycerol did not affect nutrient digestibility or CH4 emissions of lambs fed barley based finishing diets. Glycerol may improve backfat fatty acid profiles by increasing 18:0 and 9c-18:1 and reducing 10t-18:1 and the n-6/n-3 ratio.",
"title": ""
},
{
"docid": "ce6e5532c49b02988588f2ac39724558",
"text": "hlany modern computing environments involve dynamic peer groups. Distributed Simdation, mtiti-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement. in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticateed key agreement and extends the restits to Group Dfi*Hehart key agreement. In the process, some new security properties (unique to groups) are discussed.",
"title": ""
},
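The passage above studies authenticated group Diffie-Hellman key agreement. Below is a minimal, unauthenticated sketch of the underlying group Diffie-Hellman idea for a small peer group; the prime, generator and peer names are toy assumptions for illustration only, and the authentication and key-confirmation machinery that the paper is actually about is not shown.

```python
# Hedged sketch: unauthenticated group Diffie-Hellman for three peers.
# Toy-sized parameters; real systems need large safe primes plus the key
# authentication/confirmation the paper addresses.
import secrets

p = 2**61 - 1                # small Mersenne prime, NOT cryptographically safe
g = 5
peers = ["A", "B", "C"]
priv = {m: secrets.randbelow(p - 2) + 1 for m in peers}   # each peer's secret exponent

def partial_key(exclude):
    """g raised to the product of every secret except `exclude`'s own."""
    val = g
    for m in peers:
        if m != exclude:
            val = pow(val, priv[m], p)
    return val

# Each peer finishes the exchange by raising its partial value to its own secret,
# so everyone ends up with g^(a*b*c) mod p without revealing any secret exponent.
keys = {m: pow(partial_key(m), priv[m], p) for m in peers}
assert len(set(keys.values())) == 1          # all peers derive the same group key
print("shared group key (toy):", hex(keys["A"]))
```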
{
"docid": "46ae0bea85996747e06c1c18bd606340",
"text": "In this paper, two accurate models for interdigital capacitors and shunt inductive stubs in coplanar-waveguide structures are presented and validated over the entire W-band frequency range. Using these models, a novel bandpass filter (BPF) and a miniaturized high-pass filter are designed and fabricated. By inserting interdigital capacitors in BPF resonators, an out-of-band transmission null is introduced, which improves rejection level up to 17 dB over standard designs of similar filters. A high-pass filter is also designed, using semilumped-element models in order to miniaturize the filter structure. It is shown that a fifth-order high-pass filter can be built with a maximum dimension of less than /spl lambda//sub g//3. Great agreement between simulated and measured responses of these filters is demonstrated.",
"title": ""
},
{
"docid": "8c0e5e48c8827a943f4586b8e75f4f9d",
"text": "Predicting the results of football matches poses an interesting challenge due to the fact that the sport is so popular and widespread. However, predicting the outcomes is also a difficult problem because of the number of factors which must be taken into account that cannot be quantitatively valued or modeled. As part of this work, a software solution has been developed in order to try and solve this problem. During the development of the system, a number of tests have been carried out in order to determine the optimal combination of features and classifiers. The results of the presented system show a satisfactory capability of prediction which is superior to the one of the reference method (most likely a priori outcome).",
"title": ""
},
{
"docid": "3a066a35f064d275d64770e28672b277",
"text": "This paper describes how to add first-class generic types---including mixins---to strongly-typed OO languages with nominal subtyping such as Java and C#. A generic type system is \"first-class\" if generic types can appear in any context where conventional types can appear. In this context, a mixin is simply a generic class that extends one of its type parameters, e.g., a class C<T> that extends T. Although mixins of this form are widely used in Cpp (via templates), they are clumsy and error-prone because Cpp treats mixins as macros, forcing each mixin instantiation to be separately compiled and type-checked. The abstraction embodied in a mixin is never separately analyzed.Our formulation of mixins using first-class genericity accommodates sound local (class-by-class) type checking. A mixin can be fully type-checked given symbol tables for each of the classes that it directly references---the same context in which Java performs incremental class compilation. To our knowledge, no previous formal analysis of first-class genericity in languages with nominal type systems has been conducted, which is surprising because nominal subtyping has become predominant in mainstream object-oriented programming languages.What makes our treatment of first-class genericity particularly interesting and important is the fact that it can be added to the existing Java language without any change to the underlying Java Virtual Machine. Moreover, the extension is backward compatible with legacy Java source and class files. Although our discussion of a practical implementation strategy focuses on Java, the same implementation techniques could be applied to other object-oriented languages such as C# or Eiffel that support incremental compilation, dynamic class loading, and nominal subtyping.",
"title": ""
},
{
"docid": "8c7af6b1aa36c5369c7e023dd84dabfd",
"text": "This paper compares various methodologies for the design of Sobel Edge Detection Algorithm on Field Programmable Gate Arrays (FPGAs). We show some characteristics to design a computer vision algorithm to suitable hardware platforms. We evaluate hardware resources and power consumption of Sobel Edge Detection on two studies: Xilinx system generator (XSG) and Vivado_HLS tools which both are very useful tools for developing computer vision algorithms. The comparison the hardware resources and power consumption among FPGA platforms (Zynq-7000 AP SoC, Spartan 3A DSP) are analyzed. The hardware resources by using Vivado_HLS on both platforms are used less 9 times with BRAM_18K, 7 times with DSP48E, 2 times with FFs, and approximately with LUTs comparing with XSG. In addition, the power consumption on Zynq-7000 AP SoC spends more 30% by using Vivado_HLS than by using XSG tool and for Spartan 3A DSP consumes a half of power comparing with by using XSG tool. In the study by using Vivado_HLS shows that power consumption depends on frequency.",
"title": ""
},
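As a small companion to the passage above, here is a hedged software-only reference of the Sobel operator itself in NumPy; it illustrates the algorithm being mapped to hardware, not the XSG or Vivado_HLS flows, and the demo image is a made-up step edge.

```python
# Hedged sketch: software reference of the Sobel edge detector (NumPy only).
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
KY = KX.T                                                          # vertical gradient kernel

def sobel_magnitude(img):
    """Return the gradient magnitude of a 2-D grayscale image (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            window = img[r:r + 3, c:c + 3]
            gx = np.sum(window * KX)
            gy = np.sum(window * KY)
            out[r, c] = np.hypot(gx, gy)
    return out

demo = np.zeros((16, 16))
demo[:, 8:] = 255.0                                   # synthetic vertical step edge
edges = sobel_magnitude(demo)
print("strongest response at columns:", np.unique(np.argmax(edges, axis=1)))
```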
{
"docid": "510652008e21c97cb3a75fc921bf6cfc",
"text": "This study aims at extending our understanding regarding the adoption of mobile banking through integrating Technology Acceptance Model (TAM) and Theory of Planned Behavior (TPB). Analyzing survey data from 119 respondents yielded important findings that partially support research hypotheses. The results indicated a significant positive impact of attitude toward mobile banking and subjective norm on mobile banking adoption. Surprisingly, the effects of behavioral control and usefulness on mobile banking adoption were insignificant. Furthermore, the regression results indicated a significant impact of perceived usefulness on attitude toward mobile banking while the effect of perceived ease of use on attitude toward mobile banking was not supported. The paper concludes with a discussion of research results and draws several implications for future research.",
"title": ""
},
{
"docid": "511486e1b6e87efc1aeec646bb5af52b",
"text": "The present study examined the associations between pathological forms of narcissism and responses to scenarios describing private or public negative events. This was accomplished using a randomized twowave experimental design with 600 community participants. The grandiose form of pathological narcissism was associated with increased negative affect and less forgiveness for public offenses, whereas the vulnerable form of pathological narcissism was associated with increased negative affect following private negative events. Concerns about humiliation mediated the association of pathological narcissism with increased negative affect but not the association between grandiose narcissism and lack of forgiveness for public offenses. These findings suggest that pathological narcissism may promote maladaptive responses to negative events that occur in private (vulnerable narcissism) or public (gran-",
"title": ""
},
{
"docid": "55d92c6a46c491a5cc8d727536077c3c",
"text": "Given a collection of objects and an associated similarity measure, the all-pairs similarity search problem asks us to find all pairs of objects with similarity greater than a certain user-specified threshold. Locality-sensitive hashing (LSH) based methods have become a very popular approach for this problem. However, most such methods only use LSH for the first phase of similarity search i.e. efficient indexing for candidate generation. In this paper, we presentBayesLSH, a principled Bayesian algorithm for the subsequent phase of similarity search performing candidate pruning and similarity estimation using LSH. A simpler variant, BayesLSHLite, which calculates similarities exactly, is also presented. BayesLSH is able to quickly prune away a large majority of the false positive candidate pairs, leading to significant speedups over baseline approaches. For BayesLSH, we also provide probabilistic guarantees on the quality of the output, both in terms of accuracy and recall. Finally, the quality of BayesLSH’s output can be easily tuned and does not require any manual setting of the number of hashes to use for similarity estimation, unlike standard approaches. For two state-of-the-art candidate generation algorithms, AllPairs [3] and LSH, BayesLSH enables significant speedups, typically in the range 2x-20x for a wide variety of datasets.",
"title": ""
},
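Related to the passage above, here is a hedged sketch of the basic ingredient such LSH-based similarity estimation builds on: estimating cosine similarity from agreements between random-hyperplane (sign) hashes. The Bayesian pruning logic of BayesLSH itself is not reproduced; the vectors and hash count are illustrative assumptions.

```python
# Hedged sketch: estimating cosine similarity from random-hyperplane LSH sketches.
import numpy as np

rng = np.random.default_rng(3)
dim, n_hashes = 200, 256
planes = rng.normal(size=(n_hashes, dim))           # shared random hyperplanes

def signature(v):
    return (planes @ v) >= 0                         # one sign bit per hyperplane

def estimated_cosine(sig_a, sig_b):
    agree = np.mean(sig_a == sig_b)                  # fraction of matching bits
    return np.cos(np.pi * (1.0 - agree))             # invert the collision probability

a = rng.normal(size=dim)
b = a + 0.4 * rng.normal(size=dim)                   # a noisy, similar vector
true_cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print("true cosine   :", round(float(true_cos), 3))
print("LSH estimate  :", round(float(estimated_cosine(signature(a), signature(b))), 3))
```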
{
"docid": "01c267fbce494fcfabeabd38f18c19a3",
"text": "New insights in the programming physics of silicided polysilicon fuses integrated in 90 nm CMOS have led to a programming time of 100 ns, while achieving a resistance increase of 107. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained",
"title": ""
},
{
"docid": "3f5461231e7120be4fbddfd53c533a53",
"text": "OBJECTIVE\nTo develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common.\n\n\nSTUDY DESIGN\nRegression risk analysis estimates were compared with internal standards as well as with Mantel-Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR.\n\n\nDATA COLLECTION\nData sets produced using Monte Carlo simulations.\n\n\nPRINCIPAL FINDINGS\nRegression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases.\n\n\nCONCLUSIONS\nRegression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case-control studies, particularly when outcomes are common or effect size is large.",
"title": ""
},
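For the passage above, here is a hedged sketch of one common way to obtain an adjusted risk ratio directly from a fitted logistic model, marginal standardization (predictive margins): predict each subject's risk with exposure forced to 1 and to 0, average, and take the ratio or difference. The abstract's exact regression risk analysis estimator and its standard errors are not reproduced, and the data below are synthetic.

```python
# Hedged sketch: adjusted risk ratio/difference from a logistic model via
# marginal standardization. Synthetic data; no standard errors computed here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 4000
confounder = rng.normal(size=n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-confounder)))            # exposure depends on confounder
risk = 1 / (1 + np.exp(-(-1.0 + 0.7 * exposure + 0.8 * confounder)))
outcome = rng.binomial(1, risk)                                       # fairly common outcome

X = sm.add_constant(np.column_stack([exposure, confounder]))          # cols: const, exposure, confounder
fit = sm.GLM(outcome, X, family=sm.families.Binomial()).fit()

X1 = X.copy(); X1[:, 1] = 1          # counterfactual: everyone exposed
X0 = X.copy(); X0[:, 1] = 0          # counterfactual: everyone unexposed
risk1, risk0 = fit.predict(X1).mean(), fit.predict(X0).mean()
print("adjusted risk ratio :", round(risk1 / risk0, 3))
print("adjusted risk diff  :", round(risk1 - risk0, 3))
```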
{
"docid": "92fcc4d21872dca232c624a11eb3988c",
"text": "Most automobile manufacturers maintain many vehicle types to keep a successful position on the market. Through the further development all vehicle types gain a diverse amount of new functionality. Additional features have to be supported by the car’s software. For time efficient accomplishment, usually the existing electronic control unit (ECU) code is extended. In the majority of cases this evolutionary development process is accompanied by a constant decay of the software architecture. This effect known as software erosion leads to an increasing deviation from the requirements specifications. To counteract the erosion it is necessary to continuously restore the architecture in respect of the specification. Automobile manufacturers cope with the erosion of their ECU software with varying degree of success. Successfully we applied a methodical and structured approach of architecture restoration in the specific case of the brake servo unit (BSU). Software product lines from existing BSU variants were extracted by explicit projection of the architecture variability and decomposition of the original architecture. After initial application, this approach was capable to restore the BSU architecture recurrently.",
"title": ""
},
{
"docid": "e14801b902bad321870677c4a723ae2c",
"text": "We propose a framework to incorporate unlabeled data in kernel classifier, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach.",
"title": ""
},
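A hedged sketch of the idea in the passage above: build a kernel matrix over labeled plus unlabeled points, modify its eigenspectrum so that directions supported by the data's cluster structure are emphasized, then train a kernel classifier on the modified kernel. The step-function spectrum transfer used here is one simple choice among several, and the two-moons data and label count are illustrative assumptions.

```python
# Hedged sketch: "cluster kernel" via eigenspectrum modification of an RBF kernel
# over labeled + unlabeled data, followed by an SVM on the precomputed kernel.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.08, random_state=0)
labeled = np.arange(10)                              # pretend only 10 points are labeled

K = rbf_kernel(X, gamma=2.0)                         # kernel over labeled + unlabeled points
eigvals, eigvecs = np.linalg.eigh(K)
new_vals = np.where(eigvals >= np.sort(eigvals)[-20], 1.0, 1e-3)   # boost top spectral directions
K_mod = eigvecs @ np.diag(new_vals) @ eigvecs.T      # modified-spectrum kernel

clf = SVC(kernel="precomputed").fit(K_mod[np.ix_(labeled, labeled)], y[labeled])
pred = clf.predict(K_mod[:, labeled])
print("accuracy on all points with 10 labels:", round(float((pred == y).mean()), 3))
```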
{
"docid": "77731bed6cf76970e851f3b2ce467c1b",
"text": "We introduce SparkGalaxy, a big data processing toolkit that is able to encode complex data science experiments as a set of high-level workflows. SparkGalaxy combines the Spark big data processing platform and the Galaxy workflow management system to o↵er a set of tools for graph processing and machine learning using a novel interaction model for creating and using complex workflows. SparkGalaxy contributes an easy-to-use interface and scalable algorithms for data science. We demonstrate SparkGalaxy use in large social network analysis and other case stud-",
"title": ""
},
{
"docid": "67ca2df3c7d660600298e517020fe974",
"text": "The recent trend to design more efficient and versatile ships has increased the variety in hybrid propulsion and power supply architectures. In order to improve performance with these architectures, intelligent control strategies are required, while mostly conventional control strategies are applied currently. First, this paper classifies ship propulsion topologies into mechanical, electrical and hybrid propulsion, and power supply topologies into combustion, electrochemical, stored and hybrid power supply. Then, we review developments in propulsion and power supply systems and their control strategies, to subsequently discuss opportunities and challenges for these systems and the associated control. We conclude that hybrid architectures with advanced control strategies can reduce fuel consumption and emissions up to 10–35%, while improving noise, maintainability, manoeuvrability and comfort. Subsequently, the paper summarises the benefits and drawbacks, and trends in application of propulsion and power supply technologies, and it reviews the applicability and benefits of promising advanced control strategies. Finally, the paper analyses which control strategies can improve performance of hybrid systems for future smart and autonomous ships and concludes that a combination of torque, angle of attack, and Model Predictive Control with dynamic settings could improve performance of future smart and more",
"title": ""
},
{
"docid": "091a37c8e07520154e3305bb79427f76",
"text": "Document classification presents difficult challenges due to the sparsity and the high dimensionality of text data, and to the complex semantics of the natural language. The traditional document representation is a word-based vector (Bag of Words, or BOW), where each dimension is associated with a term of the dictionary containing all the words that appear in the corpus. Although simple and commonly used, this representation has several limitations. It is essential to embed semantic information and conceptual patterns in order to enhance the prediction capabilities of classification algorithms. In this paper, we overcome the shortages of the BOW approach by embedding background knowledge derived from Wikipedia into a semantic kernel, which is then used to enrich the representation of documents. Our empirical evaluation with real data sets demonstrates that our approach successfully achieves improved classification accuracy with respect to the BOW technique, and to other recently developed methods.",
"title": ""
}
] |
scidocsrr
|
c13f99f81a8b967362b443f8a5e3f080
|
Analysis of Datamining Technique for Traffic Accident Severity Problem : A Review
|
[
{
"docid": "f7c7e00e3a2b07cd5845b26d6522d16e",
"text": "This work employed Artificial Neural Networks and Decision Trees data analysis techniques to discover new knowledge from historical data about accidents in one of Nigeria’s busiest roads in order to reduce carnage on our highways. Data of accidents records on the first 40 kilometres from Ibadan to Lagos were collected from Nigeria Road Safety Corps. The data were organized into continuous and categorical data. The continuous data were analysed using Artificial Neural Networks technique and the categorical data were also analysed using Decision Trees technique .Sensitivity analysis was performed and irrelevant inputs were eliminated. The performance measures used to determine the performance of the techniques include Mean Absolute Error (MAE), Confusion Matrix, Accuracy Rate, True Positive, False Positive and Percentage correctly classified instances. Experimental results reveal that, between the machines learning paradigms considered, Decision Tree approach outperformed the Artificial Neural Network with a lower error rate and higher accuracy rate. Our research analysis also shows that, the three most important causes of accident are Tyre burst, loss of control and over speeding.",
"title": ""
},
{
"docid": "ca600a96bd895537af760efccb30c776",
"text": "This paper emphasizes the importance of Data Mining classification algorithms in predicting the vehicle collision patterns occurred in training accident data set. This paper is aimed at deriving classification rules which can be used for the prediction of manner of collision. The classification algorithms viz. C4.5, C-RT, CS-MC4, Decision List, ID3, Naïve Bayes and RndTree have been applied in predicting vehicle collision patterns. The road accident training data set obtained from the Fatality Analysis Reporting System (FARS) which is available in the University of Alabama’s Critical Analysis Reporting Environment (CARE) system. The experimental results indicate that RndTree classification algorithm achieved better accuracy than other algorithms in classifying the manner of collision which increases fatality rate in road accidents. Also the feature selection algorithms including CFS, FCBF, Feature Ranking, MIFS and MODTree have been explored to improve the classifier accuracy. The result shows that the Feature Ranking method significantly improved the accuracy of the classifiers.",
"title": ""
},
{
"docid": "be3fa2fbaaa362aace36d112ff09f94d",
"text": "One of the key objectives in accident data analysis to identify the main factors associated with a road and traffic accident. However, heterogeneous nature of road accident data makes the analysis task difficult. Data segmentation has been used widely to overcome this heterogeneity of the accident data. In this paper, we proposed a framework that used K-modes clustering technique as a preliminary task for segmentation of 11,574 road accidents on road network of Dehradun (India) between 2009 and 2014 (both included). Next, association rule mining are used to identify the various circumstances that are associated with the occurrence of an accident for both the entire data set (EDS) and the clusters identified by K-modes clustering algorithm. The findings of cluster based analysis and entire data set analysis are then compared. The results reveal that the combination of k mode clustering and association rule mining is very inspiring as it produces important information that would remain hidden if no segmentation has been performed prior to generate association rules. Further a trend analysis have also been performed for each clusters and EDS accidents which finds different trends in different cluster whereas a positive trend is shown by EDS. Trend analysis also shows that prior segmentation of accident data is very important before analysis.",
"title": ""
},
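A hedged sketch of the two-stage analysis the passage above describes: segment categorical accident records with k-modes, then mine association rules within each cluster. It assumes the third-party `kmodes` and `mlxtend` packages behave as described in their documentation, and the toy records are made up; the Dehradun data are not reproduced.

```python
# Hedged sketch: k-modes segmentation of categorical accident records followed by
# per-cluster association-rule mining. Assumes the `kmodes` and `mlxtend` packages.
import pandas as pd
from kmodes.kmodes import KModes
from mlxtend.frequent_patterns import apriori, association_rules

records = pd.DataFrame({
    "road":     ["highway", "highway", "urban", "urban", "highway", "urban"],
    "light":    ["night",   "night",   "day",   "day",   "night",   "night"],
    "severity": ["fatal",   "fatal",   "minor", "minor", "serious", "minor"],
})

clusters = KModes(n_clusters=2, init="Huang", n_init=5, random_state=0).fit_predict(records)
records["cluster"] = clusters

for c in sorted(records["cluster"].unique()):
    subset = records[records["cluster"] == c].drop(columns="cluster")
    onehot = pd.get_dummies(subset).astype(bool)          # transaction-style encoding
    frequent = apriori(onehot, min_support=0.5, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
    print(f"cluster {c}: {len(rules)} rules")
    if not rules.empty:
        print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```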
{
"docid": "310f13dac8d7cf2d1b40878ef6ce051b",
"text": "Traffic Accidents are occurring due to development of automobile industry and the accidents are unavoidable even the traffic rules are very strictly maintained. Data mining algorithm is applied to model the traffic accident injury level by using traffic accident dataset. It helped by obtaining the characteristics of drivers behavior, road condition and weather condition, Accident severity that are connected with different injury severities and death. This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting the real data from previous research and obtains the injury severity level of traffic accident data.",
"title": ""
},
{
"docid": "8a72fa8d8166bccbb71272b360586f66",
"text": "Traffic accident data are often heterogeneous, which can cause certain relationships to remain hidden. Therefore, traffic accident analysis is often performed on a small subset of traffic accidents or several models are built for various traffic accident types. In this paper, we examine the effectiveness of a clustering technique, i.e. latent class clustering, for identifying homogenous traffic accident types. Firstly, a heterogeneous traffic accident data set is segmented into seven clusters, which are translated into seven traffic accident types. Secondly, injury analysis is performed for each cluster. The results of these cluster-based analyses are compared with the results of a full-data analysis. This shows that applying latent class clustering as a preliminary analysis can reveal hidden relationships and can help the domain expert or traffic safety researcher to segment traffic accidents.",
"title": ""
},
{
"docid": "70bad179fc75181c045362bb9ee3f4dc",
"text": "Engineers and researchers in the automobile industry have tried to design and build safer automobiles, but traffic accidents are unavoidable. Patterns involved in dangerous crashes could be detected if we develop accurate prediction models capable of automatic classification of type of injury severity of various traffic accidents. These behavioral and roadway accident patterns can be useful to develop traffic safety control policies. We believe that to obtain the greatest possible accident reduction effects with limited budgetary resources, it is important that measures be based on scientific and objective surveys of the causes of accidents and severity of injuries. This paper summarizes the performance of four machine learning paradigms applied to modeling the severity of injury that occurred during traffic accidents. We considered neural networks trained using hybrid learning approaches, support vector machines, decision trees and a concurrent hybrid model involving decision trees and neural networks. Experiment results reveal that among the machine learning paradigms considered the hybrid decision tree-neural network approach outperformed the individual approaches.",
"title": ""
}
] |
[
{
"docid": "6b622da925ead8c237518ab21fa3e85d",
"text": "Helpless children attribute their failures to lack of ability and view them as insurmountable. Mastery-oriented children, in contrast, tend to emphasize motivational factors and to view failure as surmountable. Although the performance of the two groups is usually identical during success of prior to failure, past research suggests that these groups may well differ in the degree to which they perceive that their successes are replicable and hence that their failures are avoidable. The present study was concerned with the nature of such differences. Children performed a task on which they encountered success and then failure. Half were asked a series of questions about their performance after success and half after failure. Striking differences emerged: Compared to mastery-oriented children, helpless children underestimated the number of success (and overestimated the number of failures), did not view successes as indicative of ability, and did not expect the successes to continue. subsequent failure led them to devalue ;their performance but left the mastery-oriented children undaunted. Thus, for helpless children, successes are less salient, less predictive, and less enduring--less successful.",
"title": ""
},
{
"docid": "05df343534f207aed5500018ce137c71",
"text": "Preface All around the world, every year, scientists and practitioners discuss at numerous conferences dedicated to Artificial Intelligence the novel achievements and challenges. Such conferences are organized also in Poland, one of them is the International Symposium Advances in Artificial Intelligence and Applications (AAIA), organized annually within the International Multiconference on Computer Science and Information Technology framework. Selected participants of that Symposium have been invited to submit their papers to this issue of Fundamenta Informaticae. Two papers have been written especially to this issue of Fundamenta journal, three other were presented at the Symposium and are considerably extended. The first paper, entitled \" Measuring Semantic Closeness of Ontologically Demarcated Resources \" is written by two teams: from Poland and Korea. Fusion of their experiences has allowed to develop an agent-based system, aimed to support workers in an organization. One of key functionalities of this system is ontological matchmaking, understood as a way of establishing closeness between resources. The system recommends which, among available resources, are relevant / of interest to the worker. Authors approach to measuring semantic closeness between ontologically demarcated information objects is discussed in the paper. A Duty Trip Support application is used as a case study. It is worth mentioning that, in computer science, ontology is a representation of a set of concepts and relationships between them within a given domain. The second paper, \" Designing Model Based Classifiers by Emphasizing Soft Targets \" , deals with training classifiers. Classification task is very popular in real-life problems. A number of different classification methods exists, each reveals some advantages and weakness. The authors explore the effectiveness of using emphasized soft targets with generative models, such as Gaussian Mixture Models (GMM), and Gaussian Processes (GP). Their approach seems to produce better performance and is less sensitive for parameters values. The third paper, Improved Resulted Word Counts Optimizer for Automatic Image Annotation Problem , uses classifiers (a family of classifiers) for automatic annotation of images. It is an important research topic in pattern recognition area. In the paper, a generic approach to find correct word frequencies is proposed. The Optimizer can be used with different automatic image annotators, based on various machine learning paradigms. In the paper, a new, improved authors former method, Greedy Resulted Word Counts Optimization is proposed. The proposed method is more intuitive and it reduces",
"title": ""
},
{
"docid": "72e1c5690f20c47a63ebbb1dd3fc7f2c",
"text": "In edge-cloud computing, a set of edge servers are deployed near the mobile devices such that these devices can offload jobs to the servers with low latency. One fundamental and critical problem in edge-cloud systems is how to dispatch and schedule the jobs so that the job response time (defined as the interval between the release of a job and the arrival of the computation result at its device) is minimized. In this paper, we propose a general model for this problem, where the jobs are generated in arbitrary order and times at the mobile devices and offloaded to servers with both upload and download delays. Our goal is to minimize the total weighted response time over all the jobs. The weight is set based on how latency sensitive the job is. We derive the first online job dispatching and scheduling algorithm in edge-clouds, called OnDisc, which is scalable in the speed augmentation model; that is, OnDisc is (1 + ε)-speed O(1/ε)-competitive for any constant ε ∊ (0,1). Moreover, OnDisc can be easily implemented in distributed systems. Extensive simulations on a real-world data-trace from Google show that OnDisc can reduce the total weighted response time dramatically compared with heuristic algorithms.",
"title": ""
},
{
"docid": "d8536cd772437753b3b9e972ae5653f3",
"text": "Modeling students’ knowledge is a fundamental part of intelligent tutoring systems. One of the most popular methods for estimating students’ knowledge is Corbett and Anderson’s [6] Bayesian Knowledge Tracing model. The model uses four parameters per skill, fit using student performance data, to relate performance to learning. Beck [1] showed that existing methods for determining these parameters are prone to the Identifiability Problem: the same performance data can be fit equally well by different parameters, with different implications on system behavior. Beck offered a solution based on Dirichlet Priors [1], but, we show this solution is vulnerable to a different problem, Model Degeneracy, where parameter values violate the model’s conceptual meaning (such as a student being more likely to get a correct answer if he/she does not know a skill than if he/she does). We offer a new method for instantiating Bayesian Knowledge Tracing, using machine learning to make contextual estimations of the probability that a student has guessed or slipped. This method is no more prone to problems with Identifiability than Beck’s solution, has less Model Degeneracy than competing approaches, and fits student performance data better than prior methods. Thus, it allows for more accurate and reliable student modeling in ITSs that use knowledge tracing.",
"title": ""
},
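As a companion to the passage above, here is a hedged sketch of the standard four-parameter Bayesian Knowledge Tracing update. The paper's contribution, contextual machine-learned guess and slip probabilities, would replace the fixed `guess`/`slip` constants used below; the prior and response sequence are made up for illustration.

```python
# Hedged sketch: classic Bayesian Knowledge Tracing update with fixed parameters.
# Contextual estimation (the paper's method) would supply per-observation
# guess/slip values instead of the constants below.
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """Return P(skill known) after observing one response on this skill."""
    if correct:
        evidence = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        evidence = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return evidence + (1 - evidence) * learn            # learning can occur on this step

p = 0.3                                                  # prior P(L0) for the skill
for response in [0, 1, 1, 0, 1, 1, 1]:                   # 1 = correct, 0 = incorrect
    p = bkt_update(p, response)
    print(f"after response {response}: P(known) = {p:.3f}")
```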
{
"docid": "511342f43f7b5f546e72e8651ae4e313",
"text": "With the introduction of the Microsoft Kinect for Windows v2 (Kinect v2), an exciting new sensor is available to robotics and computer vision researchers. Similar to the original Kinect, the sensor is capable of acquiring accurate depth images at high rates. This is useful for robot navigation as dense and robust maps of the environment can be created. Opposed to the original Kinect working with the structured light technology, the Kinect v2 is based on the time-of-flight measurement principle and might also be used outdoors in sunlight. In this paper, we evaluate the application of the Kinect v2 depth sensor for mobile robot navigation. The results of calibrating the intrinsic camera parameters are presented and the minimal range of the depth sensor is examined. We analyze the data quality of the measurements for indoors and outdoors in overcast and direct sunlight situations. To this end, we introduce empirically derived noise models for the Kinect v2 sensor in both axial and lateral directions. The noise models take the measurement distance, the angle of the observed surface, and the sunlight incidence angle into account. These models can be used in post-processing to filter the Kinect v2 depth images for a variety of applications.",
"title": ""
},
{
"docid": "a973ed3011d9c07ddab4c15ef82fe408",
"text": "OBJECTIVES\nTo assess the efficacy of a 6-week interdisciplinary treatment that combines coordinated psychological, medical, educational, and physiotherapeutic components (PSYMEPHY) over time compared to standard pharmacologic care.\n\n\nMETHODS\nRandomised controlled trial with follow-up at 6 months for the PSYMEPHY and control groups and 12 months for the PSYMEPHY group. Participants were 153 outpatients with FM recruited from a hospital pain management unit. Patients randomly allocated to the control group (CG) received standard pharmacologic therapy. The experimental group (EG) received an interdisciplinary treatment (12 sessions). The main outcome was changes in quality of life, and secondary outcomes were pain, physical function, anxiety, depression, use of pain coping strategies, and satisfaction with treatment as measured by the Fibromyalgia Impact Questionnaire, the Hospital Anxiety and Depression Scale, the Coping with Chronic Pain Questionnaire, and a question regarding satisfaction with the treatment.\n\n\nRESULTS\nSix months after the intervention, significant improvements in quality of life (p=0.04), physical function (p=0.01), and pain (p=0.03) were seen in the PSYMEPHY group (n=54) compared with controls (n=56). Patients receiving the intervention reported greater satisfaction with treatment. Twelve months after the intervention, patients in the PSYMEPHY group (n=58) maintained statistically significant improvements in quality of life, physical functioning, pain, and symptoms of anxiety and depression, and were less likely to use maladaptive passive coping strategies compared to baseline.\n\n\nCONCLUSIONS\nAn interdisciplinary treatment for FM was associated with improvements in quality of life, pain, physical function, anxiety and depression, and pain coping strategies up to 12 months after the intervention.",
"title": ""
},
{
"docid": "7af557e5fb3d217458d7b635ee18fee0",
"text": "The growth of mobile commerce, or the purchase of services or goods using mobile technology, heavily depends on the availability, reliability, and acceptance of mobile wallet systems. Although several researchers have attempted to create models on the acceptance of such mobile payment systems, no single comprehensive framework has yet emerged. Based upon a broad literature review of mobile technology adoption, a comprehensive model integrating eleven key consumer-related variables affecting the adoption of mobile payment systems is proposed. This model, based on established theoretical underpinnings originally established in the technology acceptance literature, extends existing frameworks by including attractiveness of alternatives and by proposing relationships between the key constructs. Japan is at the forefront of such technology and a number of domestic companies have been effectively developing and marketing mobile wallets for some time. Using this proposed framework, we present the case of the successful adoption of Mobile Suica in Japan, which can serve as a model for the rapid diffusion of such payment systems for other countries where adoption has been unexpectedly slow.",
"title": ""
},
{
"docid": "2e17c3a27d381728be7868aaf2a86281",
"text": "With the proliferation of automobile industry, vehicles are augmented with various forms of increasingly powerful computation, communication, storage and sensing resources. A vehicle therefore can be regarded as “computer-on-wheels”. With such rich resources, it is of great significance to efficiently utilize these resources. This puts forward the vision of vehicular cloud computing. In this paper, we provide an extensive survey of current vehicular cloud computing research and highlight several key issues of vehicular cloud such as architecture, inherent features, service taxonomy and potential applications.",
"title": ""
},
{
"docid": "1adacc7dc452e27024756c36eecb8cae",
"text": "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "5221a4982626902388540ba95f5a57c3",
"text": "In this chapter, event-based control approaches for microalgae culture in industrial reactors are evaluated. Those control systems are applied to regulate the microalgae culture growth conditions such as pH and dissolved oxygen concentration. The analyzed event-based control systems deal with sensor and actuator deadbands approaches in order to provide the desired properties of the controller. Additionally, a selective event-based scheme is evaluated for simultaneous control of pH and dissolved oxygen. In such configurations, the event-based approach provides the possibility to adapt the control system actions to the dynamic state of the controlled bioprocess. In such a way, the event-based control algorithm allows to establish a tradeoff between control performance and number of process update actions. This fact can be directly related with reduction of CO2 injection times, what is also reflected in CO2 losses. The application of selective event-based scheme allows the improved biomass productivity, since the controlled variables are kept within the limits for an optimal photosynthesis rate. Moreover, such a control scheme allows effective CO2 utilization and aeration system energy minimization. The analyzed control system configurations are evaluated for both tubular and raceway photobioreactors to proove its viability for different reactor configurations as well as control system objectives. Additionally, control performance indexes have been used to show the efficiency of the event-based control approaches. The obtained results demonA. Pawlowski (✉) ⋅ S. Dormido Department of Computer Science and Automatic Control, UNED, Madrid, Spain e-mail: a.pawlowski@dia.uned.es S. Dormido e-mail: sdormido@dia.uned.es J.L. Guzmán ⋅ M. Berenguel Department of Informatics, University of Almería, ceiA3, CIESOL, Almería, Spain e-mail: joseluis.guzman@ual.es",
"title": ""
},
{
"docid": "4b423a2b51a1f84c23c53b2f7fb7094a",
"text": "Convolutional neural networks have achieved great success in various vision tasks; however, they incur heavy resource costs. By using deeper and wider networks, network accuracy can be improved rapidly. However, in an environment with limited resources (e.g., mobile applications), heavy networks may not be usable. This study shows that naive convolution can be deconstructed into a shift operation and pointwise convolution. To cope with various convolutions, we propose a new shift operation called active shift layer (ASL) that formulates the amount of shift as a learnable function with shift parameters. This new layer can be optimized end-to-end through backpropagation and it can provide optimal shift values. Finally, we apply this layer to a light and fast network that surpasses existing state-of-the-art networks. Code is available at https://github.com/jyh2986/Active-Shift.",
"title": ""
},
{
"docid": "5bc50fca713538ca27b3779078cc72b2",
"text": "Word embedding aims to learn a continuous representation for each word. It attracts increasing attention due to its effectiveness in various tasks such as named entity recognition and language modeling. Most existing word embedding results are generally trained on one individual data source such as news pages or Wikipedia articles. However, when we apply them to other tasks such as web search, the performance suffers. To obtain a robust word embedding for different applications, multiple data sources could be leveraged. In this paper, we proposed a two-side multimodal neural network to learn a robust word embedding from multiple data sources including free text, user search queries and search click-through data. This framework takes the word embeddings learned from different data sources as pre-train, and then uses a two-side neural network to unify these embeddings. The pre-trained embeddings are obtained by adapting the recently proposed CBOW algorithm. Since the proposed neural network does not need to re-train word embeddings for a new task, it is highly scalable in real world problem solving. Besides, the network allows weighting different sources differently when applied to different application tasks. Experiments on two real-world applications including web search ranking and word similarity measuring show that our neural network with multiple sources outperforms state-of-the-art word embedding algorithm with each individual source. It also outperforms other competitive baselines using multiple sources.",
"title": ""
},
{
"docid": "4292b58e261d92aa113458c15590e293",
"text": "Data of different modalities generally convey complimentary but heterogeneous information, and a more discriminative representation is often preferred by combining multiple data modalities like the RGB and infrared features. However in reality, obtaining both data channels is challenging due to many limitations. For example, the RGB surveillance cameras are often restricted from private spaces, which is in conflict with the need of abnormal activity detection for personal security. As a result, using partial data channels to build a full representation of multi-modalities is clearly desired. In this paper, we propose a novel Partial-modal Generative Adversarial Networks (PM-GANs) that learns a full-modal representation using data from only partial modalities. The full representation is achieved by a generated representation in place of the missing data channel. Extensive experiments are conducted to verify the performance of our proposed method on action recognition, compared with four state-of-the-art methods. Meanwhile, a new InfraredVisible Dataset for action recognition is introduced, and will be the first publicly available action dataset that contains paired infrared and visible spectrum.",
"title": ""
},
{
"docid": "19067b3d0f951bad90c80688371532fc",
"text": "Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high performance computing are making things possible which we could only have imagined earlier. Though the enhancements in AI are making life easier for human beings day by day, there is constant fear that AI based systems will pose a threat to humanity. People in AI community have diverse set of opinions regarding the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose a novel idea of cognitive agents, including both human and machines, living together in a complex adaptive ecosystem, collaborating on human computation for producing essential social goods while promoting sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal. We propose a governance mechanism around this ecosystem to ensure ethical behaviors of all cognitive agents. Along with a novel set of use-cases of Cogniculture , we discuss the road map ahead",
"title": ""
},
{
"docid": "a82dba8f935b746b9ca98133a0a92739",
"text": "We study a symmetric collaborative dialogue setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.",
"title": ""
},
{
"docid": "422adf480622a0b6011c8d0941767ba9",
"text": "The paper presents a method for the calculus of the currents in the elementary conductors and the additional winding losses for high power a.c. machines. The accuracy method estimation and the results for a hydro-generator of 216 MW validate the proposed method for the design of the Roebel bars.",
"title": ""
},
{
"docid": "988ee751c3d7ed4019ddabe61aafec00",
"text": "Dynamic neural network (DNN) models provide an excellent means for forecasting and prediction of nonstationary time series. A neural network architecture, known as locally recurrent neural network ((LRNN) [71], is preferred to the traditional multilayer perceptron (MLP) because the time varying nature of a stock time series can be better represented using LRNN. The use of LRNN has demonstrated superiority in comparison to the widely used neural networks like the multilayered perceptron (MLP) network, radial basis function (RBF) neural network, and wavelet neural networks (WNN), etc. in predicting highly fluctuating time series databases like electrical load, electricity price, and financial markets.",
"title": ""
},
{
"docid": "649beae67217a5e4783e6151bac855f9",
"text": "Airfield ground lighting (AGL) systems are responsible for providing visual reference to aircrafts in the airport neighborhood. In an AGL system, a large number of lamps are organized in serial circuits and connected to current regulators that supply energy to the lamps. Controlling and monitoring lamps (including detection and location of burnt-out lamps) are critical for cost-saving maintenance and operation of AGL systems. Power-line Communications (PLC) is an attractive technology to connect elements of the AGL, reusing the power distribution cable as a transmission medium. PLC technologies avoid the installation of new wires throughout the airport infrastructure. This paper proposes a new model for power-line communication links in AGL systems. Every element (isolation transformer, primary circuit cable, and lamps) has been analyzed in laboratory and modeled using SPICE. The resulting models have been integrated to build a complete power-line link model. Simulation results are compared to experimental results obtained in real conditions in the Airport of Seville (Spain).",
"title": ""
},
{
"docid": "3b2cbc85f5fb17aba8a872c12ba4928a",
"text": "For over five decades, liquid injectable silicone has been used for soft-tissue augmentation. Its use has engendered polarized reactions from the public and from physicians. Adherents of this product tout its inert chemical structure, ease of use, and low cost. Opponents of silicone cite the many reports of complications, including granulomas, pneumonitis, and disfiguring nodules that are usually the result of large-volume injection and/or industrial grade or adulterated material. Unfortunately, as recently as 2006, reports in The New England Journal of Medicine and The New York Times failed to distinguish between the use of medical grade silicone injected by physicians trained in the microdroplet technique and the use of large volumes of industrial grade products injected by unlicensed or unskilled practitioners. This review separates these two markedly different procedures. In addition, it provides an overview of the chemical structure of liquid injectable silicone, the immunology of silicone reactions within the body, treatment for cosmetic improvement including human immunodeficiency virus lipoatrophy, technical considerations for its injection, complications seen following injections, and some considerations of the future for silicone soft-tissue augmentation.",
"title": ""
},
{
"docid": "109838175d109002e022115d84cae0fa",
"text": "We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).",
"title": ""
}
] |
scidocsrr
|
eabec6ed4f7bf27d2f685e5ea11b0fdc
|
Stochastic Alternating Direction Method of Multipliers
|
[
{
"docid": "2d34d9e9c33626727734766a9951a161",
"text": "In this paper, we propose and study the use of alternating direction algorithms for several `1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the `1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an `1-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the `1-norm fidelity should be the fidelity of choice in compressive sensing.",
"title": ""
},
{
"docid": "3ee39231fc2fbf3b6295b1b105a33c05",
"text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.",
"title": ""
},
{
"docid": "1014a33211c9ca3448fa02cf734a5775",
"text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.",
"title": ""
}
] |
[
{
"docid": "4124a456822b84ab9d02148179a874ca",
"text": "Successful endurance training involves the manipulation of training intensity, duration, and frequency, with the implicit goals of maximizing performance, minimizing risk of negative training outcomes, and timing peak fitness and performances to be achieved when they matter most. Numerous descriptive studies of the training characteristics of nationally or internationally competitive endurance athletes training 10 to 13 times per week seem to converge on a typical intensity distribution in which about 80% of training sessions are performed at low intensity (2 mM blood lactate), with about 20% dominated by periods of high-intensity work, such as interval training at approx. 90% VO2max. Endurance athletes appear to self-organize toward a high-volume training approach with careful application of high-intensity training incorporated throughout the training cycle. Training intensification studies performed on already well-trained athletes do not provide any convincing evidence that a greater emphasis on high-intensity interval training in this highly trained athlete population gives long-term performance gains. The predominance of low-intensity, long-duration training, in combination with fewer, highly intensive bouts may be complementary in terms of optimizing adaptive signaling and technical mastery at an acceptable level of stress.",
"title": ""
},
{
"docid": "57d1648391cac4ccfefd85aacef6b5ba",
"text": "Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another.We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and includes information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validate our simulation experiments.",
"title": ""
},
{
"docid": "bb95c0246cbd1238ad4759f488763c37",
"text": "The massive scale of future wireless networks will cause computational bottlenecks in performance optimization. In this paper, we study the problem of connecting mobile traffic to Cloud RAN (C-RAN) stations. To balance station load, we steer the traffic by designing device association rules. The baseline association rule connects each device to the station with the strongest signal, which does not account for interference or traffic hot spots, and leads to load imbalances and performance deterioration. Instead, we can formulate an optimization problem to decide centrally the best association rule at each time instance. However, in practice this optimization has such high dimensions, that even linear programming solvers fail to solve. To address the challenge of massive connectivity, we propose an approach based on the theory of optimal transport, which studies the economical transfer of probability between two distributions. Our proposed methodology can further inspire scalable algorithms for massive optimization problems in wireless networks.",
"title": ""
},
{
"docid": "0b1baa3190abb39284f33b8e73bcad1d",
"text": "Despite significant advances in machine learning and perception over the past few decades, perception algorithms can still be unreliable when deployed in challenging time-varying environments. When these systems are used for autonomous decision-making, such as in self-driving vehicles, the impact of their mistakes can be catastrophic. As such, it is important to characterize the performance of the system and predict when and where it may fail in order to take appropriate action. While similar in spirit to the idea of introspection, this work introduces a new paradigm for predicting the likely performance of a robot’s perception system based on past experience in the same workspace. In particular, we propose two models that probabilistically predict perception performance from observations gathered over time. While both approaches are place-specific, the second approach additionally considers appearance similarity when incorporating past observations. We evaluate our method in a classical decision-making scenario in which the robot must choose when and where to drive autonomously in 60 km of driving data from an urban environment. Results demonstrate that both approaches lead to fewer false decisions (in terms of incorrectly offering or denying autonomy) for two different detector models, and show that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions.",
"title": ""
},
{
"docid": "a97f71e0d5501add1ae08eeee5378045",
"text": "Machine learning is being implemented in bioinformatics and computational biology to solve challenging problems emerged in the analysis and modeling of biological data such as DNA, RNA, and protein. The major problems in classifying protein sequences into existing families/superfamilies are the following: the selection of a suitable sequence encoding method, the extraction of an optimized subset of features that possesses significant discriminatory information, and the adaptation of an appropriate learning algorithm that classifies protein sequences with higher classification accuracy. The accurate classification of protein sequence would be helpful in determining the structure and function of novel protein sequences. In this article, we have proposed a distance-based sequence encoding algorithm that captures the sequence’s statistical characteristics along with amino acids sequence order information. A statistical metric-based feature selection algorithm is then adopted to identify the reduced set of features to represent the original feature space. The performance of the proposed technique is validated using some of the best performing classifiers implemented previously for protein sequence classification. An average classification accuracy of 92% was achieved on the yeast protein sequence data set downloaded from the benchmark UniProtKB database.",
"title": ""
},
{
"docid": "31377be583495d707e6e7c3108db1c44",
"text": "Recent advancements in model-based reinforcement learning have shown that the dynamics of many structured domains (e.g. DBNs) can be learned with tractable sample complexity, despite their exponentially large state spaces. U nfortunately, these algorithms all require access to a plann er that computes a near optimal policy, and while many traditional MDP algorithms make this guarantee, their computation time grows with the number of states. We show how to replace these over-matched planners with a class of sample-based planners—whose computation time is independent of the number of states—without sacrificing the sampleefficiency guarantees of the overall learning algorithms. T o do so, we define sufficient criteria for a sample-based planne r to be used in such a learning system and analyze two popular sample-based approaches from the literature. We also in troduce our own sample-based planner, which combines the strategies from these algorithms and still meets the criter ia fo integration into our learning system. In doing so, we define the first complete RL solution for compactly represented (ex ponentially sized) state spaces with efficiently learnable dynamics that is both sample efficient and whose computation time does not grow rapidly with the number of states.",
"title": ""
},
{
"docid": "4003b1a03be323c78e98650895967a07",
"text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.",
"title": ""
},
{
"docid": "30f7c423ac49cfcd19a46b487d660c9d",
"text": "This letter presents two different waveguide-to-microstrip transition designs for the 76-81 GHz frequency band. Both transitions are fabricated on a grounded single layer substrate using a standard printed circuit board (PCB) fabrication process. A coplanar patch antenna and a feed technique at the non-radiating edge are used for the impedance transformation. In the first design, a conventional WR-10 waveguide is connected. In the second design, a WR-10 waveguide flange with an additional inductive waveguide iris is employed to improve the bandwidth. Both designs were developed for the integration of multi-channel array systems allowing an element spacing of λ0/2 or less. Measurement results of the first transition without the iris show a bandwidth of 8.5 GHz (11%) for 10 dB return loss and a minimum insertion loss (IL) of 0.35 dB. The transition using the iris increases the bandwidth to 12 GHz (15%) for 10 dB return loss and shows a minimum insertion loss of 0.6 dB at 77 GHz.",
"title": ""
},
{
"docid": "207e90cebdf23fb37f10b5ed690cb4fc",
"text": "In the scientific digital libraries, some papers from different research communities can be described by community-dependent keywords even if they share a semantically similar topic. Articles that are not tagged with enough keyword variations are poorly indexed in any information retrieval system which limits potentially fruitful exchanges between scientific disciplines. In this paper, we introduce a novel experimentally designed pipeline for multi-label semantic-based tagging developed for open-access metadata digital libraries. The approach starts by learning from a standard scientific categorization and a sample of topic tagged articles to find semantically relevant articles and enrich its metadata accordingly. Our proposed pipeline aims to enable researchers reaching articles from various disciplines that tend to use different terminologies. It allows retrieving semantically relevant articles given a limited known variation of search terms. In addition to achieving an accuracy that is higher than an expanded query based method using a topic synonym set extracted from a semantic network, our experiments also show a higher computational scalability versus other comparable techniques. We created a new benchmark extracted from the open-access metadata of a scientific digital library and published it along with the experiment code to allow further research in the topic.",
"title": ""
},
{
"docid": "84cab807959d75cc8cc7295b6facd657",
"text": "The driven-right-leg circuit is often used with biopotential differential amplifiers to reduce common mode voltage. We analyze this circuit and show that high loop gains can cause instability. We present equations that can be used to design circuits that minimize common mode voltage without instability. We also show that it is important to consider the reduction of high-frequency interference from fluorescent lights when determining the bandwidth of the drivenright-leg circuit.",
"title": ""
},
{
"docid": "67e06feae2a593017596ab238f9e096e",
"text": "ABSTRACT\nThis paper presents a survey on methods that use digital image processing techniques to detect, quantify and classify plant diseases from digital images in the visible spectrum. Although disease symptoms can manifest in any part of the plant, only methods that explore visible symptoms in leaves and stems were considered. This was done for two main reasons: to limit the length of the paper and because methods dealing with roots, seeds and fruits have some peculiarities that would warrant a specific survey. The selected proposals are divided into three classes according to their objective: detection, severity quantification, and classification. Each of those classes, in turn, are subdivided according to the main technical solution used in the algorithm. This paper is expected to be useful to researchers working both on vegetable pathology and pattern recognition, providing a comprehensive and accessible overview of this important field of research.",
"title": ""
},
{
"docid": "c24bfd3b7bbc8222f253b004b522f7d5",
"text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).",
"title": ""
},
{
"docid": "d2ce66a758efcb045e42e8accd7ba292",
"text": "Incorporating a human computer interaction (HCI) perspective into the systems development life cycle (SDLC) is critical to information systems (IS) success and in turn to the success of businesses. However, modern SDLC models are based more on organizational needs than human needs. The human interaction aspect of an information system is considered far too little (only the screen interface) and far too late in the IS development process (only at the design stage). Thus there is often a gap between satisfying organizational needs and supporting and enriching human users as they use the system for their tasks. This problem can be fixed by carefully integrating HCI development into the SDLC process to achieve a truly human-centered IS development approach. This tutorial presents a methodology for such human-centered IS development where human requirements for the whole system are emphasized. An example of applying such methodology is used to illustrate the important concepts and techniques.",
"title": ""
},
{
"docid": "74f60a944f9a208ae69b4d8e3b10e864",
"text": "We study assortment optimization problems where customer choices are governed by the nested logit model and there are constraints on the set of products offered in each nest. Under the nested logit model, the products are organized in nests. Each product in each nest has a fixed revenue associated with it. The goal is to find a feasible set of products, i.e. a feasible assortment, to maximize the expected revenue per customer. We consider cardinality and space constraints on the offered assortment, which respectively limit the number of products and the total space consumption of the products offered in each nest. We show that the optimal assortment under cardinality constraints can be obtained efficiently by solving a linear program. The assortment optimization problem under space constraints is NP-hard. We show how to obtain an assortment with a performance guarantee of two under space constraints. This assortment also provides a performance guarantee of 1/(1 − ε) when the space requirement of each product is at most a fraction ε of the space availability in each nest. Building on our results for constrained assortment optimization, we show that we can efficiently solve joint assortment optimization and pricing problems under the nested logit model, where we choose the assortment of products to offer to customers, as well as the prices of the offered products. Discrete choice models have long been used to describe how customers choose among a set of products that differ in attributes such as price and quality. Specifically, discrete choice models represent the demand for a particular product through the attributes of all products that are in the offered assortment, capturing substitution possibilities and complementary relationships between the products. To pursue this thought, different discrete choice models have been proposed in the literature. Some of these models are based on axioms as in Luce (1959), resulting in the basic attraction model, whereas some others are based on random utility theory as in McFadden (1974), resulting in the multinomial logit model. A popular extension to the multinomial logit model is the nested logit model introduced by Williams (1977). Under the nested logit model, the products are organized in nests. The choice process of a customer proceeds in such a way that the customer first selects a nest, and then a product within the selected nest. In this paper, we study constrained assortment optimization problems when customers choose according to the nested logit model. There is a fixed revenue contribution associated with each product. The goal is to find an assortment of products to offer so as to maximize the expected revenue per customer subject to a constraint on the assortment offered in each nest. We consider two types of constraints, which we refer to as cardinality and space constraints. Cardinality constraints limit the number of products in the assortment offered in each nest. We show that the optimal assortment under cardinality constraints can be obtained by solving a linear program. Under space constraints, each product occupies a certain amount of space and we limit the total space consumption of the products offered in each nest. The assortment optimization problem under space constraints is NP-hard, but we show that we can solve a tractable linear program to obtain an assortment with a certain performance guarantee. These results establish that we can obtain provably good assortments under cardinality or space constraints. 
In addition, we consider joint assortment optimization and pricing problems under the nested logit model. In the joint assortment optimization and pricing problem, the goal is to decide which assortment of products to offer and set the prices of the offered products. Customers choose among the offered products according to the nested logit model and the price of a product affects its attractiveness in the sense that if we set the price of a product higher, then it becomes less attractive to customers. Building on our results for constrained assortment optimization problems, we show that an optimal solution to the joint assortment optimization and pricing problem can be obtained efficiently by solving a linear program. Therefore, our results are not only useful for solving constrained assortment optimization problems, but they are also useful for pricing. Main Contributions. In assortment optimization problems, we consider a setting with m nests, each including n products that we can offer to customers. Under cardinality constraints, we show that we can solve a linear program with 1+m decision variables and O(mn2) constraints to obtain the optimal assortment. Under space constraints, we show that we can solve a linear program of the same size to obtain an assortment whose expected revenue deviates from the optimal expected revenue by at most a factor of two. Also, if each product consumes at most a fraction ε ∈ [0, 1)",
"title": ""
},
{
"docid": "25bddb3111da2485c341eec1d7fdf7c0",
"text": "Security protocols are building blocks in secure communications. Security protocols deploy some security mechanisms to provide certain security services. Security protocols are considered abstract when analyzed. They might involve more vulnerabilities when implemented. This manuscript provides a holistic study on security protocols. It reviews foundations of security protocols, taxonomy of attacks on security protocols and their implementations, and different methods and models for security analysis of protocols. Specifically, it clarifies differences between information-theoretic and computational security, and computational and symbolic models. Furthermore, a survey on computational security models for authenticated key exchange (AKE) and passwordauthenticated key exchange (PAKE) protocols, as the most important and well-studied type of security protocols, is provided.",
"title": ""
},
{
"docid": "2b7465ad660dadd040bd04839d3860f3",
"text": "Simulation of a pen-and-ink illustration style in a realtime rendering system is a challenging computer graphics problem. Tonal art maps (TAMs) were recently suggested as a solution to this problem. Unfortunately, only the hatching aspect of pen-and-ink media was addressed thus far. We extend the TAM approach and enable representation of arbitrary textures. We generate TAM images by distributing stroke primitives according to a probability density function. This function is derived from the input image and varies depending on the TAM’s scale and tone levels. The resulting depiction of textures approximates various styles of pen-and-ink illustrations such as outlining, stippling, and hatching.",
"title": ""
},
{
"docid": "3f1967d87d14a1ee652760929ed217d0",
"text": "Existing location-based social networks (LBSNs), e.g. Foursquare, depend mainly on GPS or network-based localization to infer users' locations. However, GPS is unavailable indoors and network-based localization provides coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor environments, where people spend 89% of their time. This in turn affects the user experience, in terms of the accuracy of the ranked list of venues, especially for the small-screens of mobile devices; misses business opportunities; and leads to reduced venues coverage.\n In this paper, we present CheckInside: a system that can provide a fine-grained indoor location-based social network. CheckInside leverages the crowd-sensed data collected from users' mobile devices during the check-in operation and knowledge extracted from current LBSNs to associate a place with its name and semantic fingerprint. This semantic fingerprint is used to obtain a more accurate list of nearby places as well as automatically detect new places with similar signatures. A novel algorithm for handling incorrect check-ins and inferring a semantically-enriched floorplan is proposed as well as an algorithm for enhancing the system performance based on the user implicit feedback.\n Evaluation of CheckInside in four malls over the course of six weeks with 20 participants shows that it can provide the actual user location within the top five venues 99% of the time. This is compared to 17% only in the case of current LBSNs. In addition, it can increase the coverage of current LBSNs by more than 25%.",
"title": ""
},
{
"docid": "95c535a587344fd0efbd5d9d299b1b98",
"text": "We propose a method to integrate feature extraction and prediction as a single optimization task by stacking a three-layer model as a deep learning structure. The first layer of the deep structure is a Long Short Term Memory (LSTM) model which deals with the sequential input data from a group of assets. The output of the LSTM model is followed by meanpooling, and the result is fed to the second layer. The second layer is a neural network layer, which further learns the feature representation. The output of the second layer is connected to a survival model as the third layer for predicting asset health condition. The parameters of the three-layer model are optimized together via stochastic gradient decent. The proposed method was tested on a small dataset collected from a fleet of mining haul trucks. The model resulted in the “individualized” failure probability representation for assessing the health condition of each individual asset, which well separates the in-service and failed trucks. The proposed method was also tested on a large open source hard drive dataset, and it showed promising result.",
"title": ""
},
{
"docid": "e287c89edaf97b11bac2d08cb4c6b385",
"text": "In this paper, we propose a new way of augmenting our environment with information without making the user carry any devices. We propose the use of video projection to display the augmentation on the objects directly. We use a projector that can be rotated and in other ways controlled remotely by a computer, to follow objects carrying a marker. The main contribution of this paper is a system that keeps the augmentation displayed in the correct place while the object or the projector moves. We describe the hardware and software design of our system, the way certain functions such as following the marker or keeping it in focus are implemented and how to calibrate the multitude of parameters of all the subsystems.",
"title": ""
},
{
"docid": "cf2fc7338a0a81e4c56440ec7c3c868e",
"text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.",
"title": ""
}
] |
scidocsrr
|
fd21b3dd5837fd60d4d96b5de059d8d7
|
C1-Continuous Terrain Reconstruction from Sparse Contours
|
[
{
"docid": "43db0f06e3de405657996b46047fa369",
"text": "Given two or more objects of general topology, intermediate objects are constructed by a distance field metamorphosis. In the presented method the interpolation of the distance field is guided by a warp function controlled by a set of corresponding anchor points. Some rules for defining a smooth least-distorting warp function are given. To reduce the distortion of the intermediate shapes, the warp function is decomposed into a rigid rotational part and an elastic part. The distance field interpolation method is modified so that the interpolation is done in correlation with the warp function. The method provides the animator with a technique that can be used to create a set of models forming a smooth transition between pairs of a given sequence of keyframe models. The advantage of the new approach is that it is capable of morphing between objects having a different topological genus where no correspondence between the geometric primitives of the models needs to be established. The desired correspondence is defined by an animator in terms of a relatively small number of anchor points",
"title": ""
}
] |
[
{
"docid": "114e2a9d3b502164ad06cbde59b682b6",
"text": "As the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. However, the size of the networks becomes increasingly large scale due to the demands of the practical applications, which poses significant challenge to construct a high performance implementations of deep learning neural networks. In order to improve the performance as well as to maintain the low power cost, in this paper we design deep learning accelerator unit (DLAU), which is a scalable accelerator architecture for large-scale deep learning networks using field-programmable gate array (FPGA) as the hardware prototype. The DLAU accelerator employs three pipelined processing units to improve the throughput and utilizes tile techniques to explore locality for deep learning applications. Experimental results on the state-of-the-art Xilinx FPGA board demonstrate that the DLAU accelerator is able to achieve up to $36.1 {\\times }$ speedup comparing to the Intel Core2 processors, with the power consumption at 234 mW.",
"title": ""
},
{
"docid": "70d0f96d42467e1c998bb9969de55a39",
"text": "RGB-D cameras provide both a color image and a depth image which contains the real depth information about per-pixel. The richness of their data and the development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a novel hybrid visual odometry using an RGB-D camera. Different from the original method, it is a pure visual odometry method without any other information, such as inertial data. The important key is hybrid, which means that the odometry can be executed in two different processes depending on the conditions. It consists of two parts, including a feature-based visual odometry and a direct visual odometry. Details about the algorithm are discussed in the paper. Especially, the switch conditions are described in detail. Beside, we evaluate the continuity and robustness for the system on public dataset. The experiments demonstrate that our system has more stable continuity and better robustness.",
"title": ""
},
{
"docid": "c4fe9fd7e506e18f1a38bc71b7434b99",
"text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.",
"title": ""
},
{
"docid": "121fc3a009e8ce2938f822ba437bdaa3",
"text": "Due to an increased awareness and significant environmental pressures from various stakeholders, companies have begun to realize the significance of incorporating green practices into their daily activities. This paper proposes a framework using Fuzzy TOPSIS to select green suppliers for a Brazilian electronics company; our framework is built on the criteria of green supply chain management (GSCM) practices. An empirical analysis is made, and the data are collected from a set of 12 available suppliers. We use a fuzzy TOPSIS approach to rank the suppliers, and the results of the proposed framework are compared with the ranks obtained by both the geometric mean and the graded mean methods of fuzzy TOPSIS methodology. Then a Spearman rank correlation coefficient is used to find the statistical difference between the ranks obtained by the three methods. Finally, a sensitivity analysis has been performed to examine the influence of the preferences given by the decision makers for the chosen GSCM practices on the selection of green suppliers. Results indicate that the four dominant criteria are Commitment of senior management to GSCM; Product designs that reduce, reuse, recycle, or reclaim materials, components, or energy; Compliance with legal environmental requirements and auditing programs; and Product designs that avoid or reduce toxic or hazardous material use. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c138108f567d7f2dd130b6209b11caef",
"text": "Autotuning using relay feedback is widely used to identify low order integrating plus dead time (IPDT) systems as the method is simple and is operated in closed-loop without interrupting the production process. Oscillatory responses from the process due to ideal relay input are collected to calculate ultimate properties of the system that in turn are used to model the responses as functions of system model parameters. These theoretical models of relay response are validated. After adjusting the phase shift, input and output responses are used to find land mark points that are used to formulate algorithms for parameter estimation of the process model. The method is even applicable to distorted relay responses due to load disturbance or measurement noise. Closed-loop simulations are carried out using model based control strategy and performances are calculated.",
"title": ""
},
{
"docid": "4096f6d67acee9dd0eb472d8bf405e7b",
"text": "Emojis are used frequently in social media. A widely assumed view is that emojis express the emotional state of the user, which has led to research focusing on the expressiveness of emojis independent from the linguistic context. We argue that emojis and the linguistic texts can modify the meaning of each other. The overall communicated meaning is not a simple sum of the two channels. In order to study the meaning interplay, we need data indicating the overall sentiment of the entire message as well as the sentiment of the emojis stand-alone. We propose that Facebook Reactions are a good data source for such a purpose. FB reactions (e.g. “Love” and “Angry”) indicate the readers’ overall sentiment, against which we can investigate the types of emojis used the comments under different reaction profiles. We present a data set of 21,000 FB posts (57 million reactions and 8 million comments) from public media pages across four countries.",
"title": ""
},
{
"docid": "18851774e598f4cb66dbc770abe4a83f",
"text": "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.",
"title": ""
},
{
"docid": "3364f6fab787e3dbcc4cb611960748b8",
"text": "Filamentous fungi can each produce dozens of secondary metabolites which are attractive as therapeutics, drugs, antimicrobials, flavour compounds and other high-value chemicals. Furthermore, they can be used as an expression system for eukaryotic proteins. Application of most fungal secondary metabolites is, however, so far hampered by the lack of suitable fermentation protocols for the producing strain and/or by low product titers. To overcome these limitations, we report here the engineering of the industrial fungus Aspergillus niger to produce high titers (up to 4,500 mg • l−1) of secondary metabolites belonging to the class of nonribosomal peptides. For a proof-of-concept study, we heterologously expressed the 351 kDa nonribosomal peptide synthetase ESYN from Fusarium oxysporum in A. niger. ESYN catalyzes the formation of cyclic depsipeptides of the enniatin family, which exhibit antimicrobial, antiviral and anticancer activities. The encoding gene esyn1 was put under control of a tunable bacterial-fungal hybrid promoter (Tet-on) which was switched on during early-exponential growth phase of A. niger cultures. The enniatins were isolated and purified by means of reverse phase chromatography and their identity and purity proven by tandem MS, NMR spectroscopy and X-ray crystallography. The initial yields of 1 mg • l−1 of enniatin were increased about 950 fold by optimizing feeding conditions and the morphology of A. niger in liquid shake flask cultures. Further yield optimization (about 4.5 fold) was accomplished by cultivating A. niger in 5 l fed batch fermentations. Finally, an autonomous A. niger expression host was established, which was independent from feeding with the enniatin precursor d-2-hydroxyvaleric acid d-Hiv. This was achieved by constitutively expressing a fungal d-Hiv dehydrogenase in the esyn1-expressing A. niger strain, which used the intracellular α-ketovaleric acid pool to generate d-Hiv. This is the first report demonstrating that A. niger is a potent and promising expression host for nonribosomal peptides with titers high enough to become industrially attractive. Application of the Tet-on system in A. niger allows precise control on the timing of product formation, thereby ensuring high yields and purity of the peptides produced.",
"title": ""
},
{
"docid": "4fa25fd7088d9b624be75239d02cfc4b",
"text": "Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: 1) control bandwidth decreases about an order of magnitude at each higher level, 2) perceptual resolution of spatial and temporal patterns contracts about an order-of-magnitude at each higher level, 3) goals expand in scope and planning horizons expand in space and time about an order-of-magnitude at each higher level, and 4) models of the world and memories of events expand their range in space and time by about an order-of-magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition planning and execution), world modeling, sensory processing, and value judgment. Sensory feedback control loops are closed at every level.",
"title": ""
},
{
"docid": "4054713a00a9a2af6eb65f56433a943e",
"text": "The question why deep learning algorithms perform so well in practice has attracted increasing research interest. However, most of well-established approaches, such as hypothesis capacity, robustness or sparseness, have not provided complete explanations, due to the high complexity of the deep learning algorithms and their inherent randomness. In this work, we introduce a new approach – ensemble robustness – towards characterizing the generalization performance of generic deep learning algorithms. Ensemble robustness concerns robustness of the population of the hypotheses that may be output by a learning algorithm. Through the lens of ensemble robustness, we reveal that a stochastic learning algorithm can generalize well as long as its sensitiveness to adversarial perturbation is bounded in average, or equivalently, the performance variance of the algorithm is small. Quantifying ensemble robustness of various deep learning algorithms may be difficult analytically. However, extensive simulations for seven common deep learning algorithms for different network architectures provide supporting evidence for our claims. Furthermore, our work explains the good performance of several published deep learning algorithms.",
"title": ""
},
{
"docid": "31ef6c21c9877df266e0fd0506c3e90a",
"text": "My research has centered around understanding the colorful appearance of physical and digital paintings and images. My work focuses on decomposing images or videos into more editable data structures called layers, to enable efficient image or video re-editing. Given a time-lapse painting video, we can recover translucent layer strokes from every frame pairs by maximizing translucency of layers for its maximum re-usability, under either digital color compositing model or a physically inspired nonlinear color layering model, after which, we apply a spatial-temporal clustering on strokes to obtain semantic layers for further editing, such as global recoloring and local recoloring, spatial-temporal gradient recoloring and so on. With a single image input, we use the convex shape geometry intuition of color points distribution in RGB space, to help extract a small size palette from a image and then solve an optimization to extract translucent RGBA layers, under digital alpha compositing model. The translucent layers are suitable for global and local image recoloring and new object insertion as layers efficiently. Alternatively, we can apply an alternating least square optimization to extract multi-spectral physical pigment parameters from a single digitized physical painting image, under a physically inspired nonlinear color mixing model, with help of some multi-spectral pigment parameters priors. With these multi-spectral pigment parameters and their mixing layers, we demonstrate tonal adjustments, selection masking, recoloring, physical pigment understanding, palette summarization and edge enhancement. Our recent ongoing work introduces an extremely scalable and efficient yet simple palette-based image decomposition algorithm to extract additive mixing layers from single image. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer updating GUI. We also present a palette-based framework for color composition for visual applications, such as image and video harmonization, color transfer and so on.",
"title": ""
},
{
"docid": "3e8146798f6415a04d4fb5cf3f2f7c3d",
"text": "The retinal vasculature is composed of the arteries and veins with their tributaries which are visible within the retinal image. The segmentation and measurement of the retinal vasculature is of primary interest in he diagnosis and treatment of a number of systemic and ophthalmologic conditions. The accurate segmentati on of the retinal blood vessels is often an essential prerequisite step in the identification of retinal anatomy and pathology. In this study, we present an automated a pproach for blood vessels extraction using mathemat ical morphology. Two main steps are involved: enhancemen t operation is applied to the original retinal imag e in order to remove the noise and increase contrast of retinal blood vessels and morphology operations are employed to extract retinal blood vessels. This ope ration of segmentation is applied to binary image o f tophat transformation. The result was compared with ot her algorithms and give better results.",
"title": ""
},
{
"docid": "54d0e29425cad2b80db426b9c59632f7",
"text": "Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has commanded intense interests in research and development on the Kinect technology. In this article, we present a comprehensive survey on Kinect applications, and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstructions. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review literatures on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers to investigate better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation.",
"title": ""
},
{
"docid": "3b05004828d71f1b69d80cb25e165d7f",
"text": "Mapping in the GPS-denied environment is an important and challenging task in the field of robotics. In the large environment, mapping can be significantly accelerated by multiple robots exploring different parts of the environment. Accordingly, a key problem is how to integrate these local maps built by different robots into a single global map. In this paper, we propose an approach for simultaneous merging of multiple grid maps by the robust motion averaging. The main idea of this approach is to recover all global motions for map merging from a set of relative motions. Therefore, it firstly adopts the pair-wise map merging method to estimate relative motions for grid map pairs. To obtain as many reliable relative motions as possible, a graph-based sampling scheme is utilized to efficiently remove unreliable relative motions obtained from the pair-wise map merging. Subsequently, the accurate global motions can be recovered from the set of reliable relative motions by the motion averaging. Experimental results carried on real robot data sets demonstrate that proposed approach can achieve simultaneous merging of multiple grid maps with good performances.",
"title": ""
},
{
"docid": "d380a5de56265c80309733370c612316",
"text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.",
"title": ""
},
{
"docid": "3ea35f018869f02209105200f78d03b4",
"text": "We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future.",
"title": ""
},
{
"docid": "b96e2dba118d89942990337df26c7b20",
"text": "This paper introduces a high-speed all-hardware scale-invariant feature transform (SIFT) architecture with parallel and pipeline technology for real-time extraction of image features. The task-level parallel and pipeline structure are exploited between the hardware blocks, and the data-level parallel and pipeline architecture are exploited inside each block. Two identical random access memories are adopted with ping-pong operation to execute the key point detection module and the descriptor generation module in task-level parallelism. With speeding up the key point detection module of SIFT, the descriptor generation module has become the bottleneck of the system's performance; therefore, this paper proposes an optimized descriptor generation algorithm. A novel window-dividing method is proposed with square subregions arranged in 16 directions, and the descriptors are generated by reordering the histogram instead of window rotation. Therefore, the main orientation detection block and descriptor generation block run in parallel instead of interactively. With the optimized algorithm cooperating with pipeline structure inside each block, we not only improve the parallelism of the algorithm, but also avoid floating data calculation to save hardware consumption. Thus, the descriptor generation module leads the speed almost 15 times faster than a recent solution. The proposed system was implemented on field programmable gate array and the overall time to extract SIFT features for an image having 512×512 pixels is only 6.55 ms (sufficient for real-time applications), and the number of feature points can reach up to 2900.",
"title": ""
},
{
"docid": "2c05e7e3fa1e76d89763eb1ba4af672a",
"text": "Pooling second-order local feature statistics to form a high-dimensional bilinear feature has been shown to achieve state-of-the-art performance on a variety of fine-grained classification tasks. To address the computational demands of high feature dimensionality, we propose to represent the covariance features as a matrix and apply a low-rank bilinear classifier. The resulting classifier can be evaluated without explicitly computing the bilinear feature map which allows for a large reduction in the compute time as well as decreasing the effective number of parameters to be learned. To further compress the model, we propose a classifier co-decomposition that factorizes the collection of bilinear classifiers into a common factor and compact per-class terms. The co-decomposition idea can be deployed through two convolutional layers and trained in an end-to-end architecture. We suggest a simple yet effective initialization that avoids explicitly first training and factorizing the larger bilinear classifiers. Through extensive experiments, we show that our model achieves state-of-the-art performance on several public datasets for fine-grained classification trained with only category labels. Importantly, our final model is an order of magnitude smaller than the recently proposed compact bilinear model [8], and three orders smaller than the standard bilinear CNN model [19].",
"title": ""
},
{
"docid": "4d832a8716aebf7c36ae6894ce1bac33",
"text": "Autonomous vehicles require a reliable perception of their environment to operate in real-world conditions. Awareness of moving objects is one of the key components for the perception of the environment. This paper proposes a method for detection and tracking of moving objects (DATMO) in dynamic environments surrounding a moving road vehicle equipped with a Velodyne laser scanner and GPS/IMU localization system. First, at every time step, a local 2.5D grid is built using the last sets of sensor measurements. Along time, the generated grids combined with localization data are integrated into an environment model called local 2.5D map. In every frame, a 2.5D grid is compared with an updated 2.5D map to compute a 2.5D motion grid. A mechanism based on spatial properties is presented to suppress false detections that are due to small localization errors. Next, the 2.5D motion grid is post-processed to provide an object level representation of the scene. The detected moving objects are tracked over time by applying data association and Kalman filtering. The experiments conducted on different sequences from KITTI dataset showed promising results, demonstrating the applicability of the proposed method.",
"title": ""
},
{
"docid": "c64751968597299dc5622f589742c37d",
"text": "OpenFlow switching and Network Operating System (NOX) have been proposed to support new conceptual networking trials for fine-grained control and visibility. The OpenFlow is expected to provide multi-layer networking with switching capability of Ethernet, MPLS, and IP routing. NOX provides logically centralized access to high-level network abstraction and exerts control over the network by installing flow entries in OpenFlow compatible switches. The NOX, however, is missing the necessary functions for QoS-guaranteed software defined networking (SDN) service provisioning on carrier grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaborations among control elements in other domain network. In this paper, we propose a QoS-aware Network Operating System (QNOX) for SDN with Generalized OpenFlows. The functional modules and operations of QNOX for QoS-aware SDN service provisioning with the major components (e.g., service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE)) are explained in detail. The current status of prototype implementation and performances are explained. The scalability of the QNOX is also analyzed to confirm that the proposed framework can be applied for carrier grade large scale provider Internet1.",
"title": ""
}
] |
scidocsrr
|
7f651143d12c48f89cabb80fa77c9541
|
An inexpensive scheme for calibration of a colour monitor in terms of CIE standard coordinates
|
[
{
"docid": "daa4114fe8ba064e816db1d579808fee",
"text": "Digital control of color television monitors—in particular, via frame buffers—has added precise control of a large subset of human colorspace to the capabilities of computer graphics. This subset is the gamut of colors spanned by the red, green, and blue (RGB) electron guns exciting their respective phosphors. It is called the RGB monitor gamut. Full-blown color theory is a quite complex subject involving physics, psychology, and physiology, but restriction to the RGB monitor gamut simplifies matters substantially. It is linear, for example, and admits to familiar spatial representations. This paper presents a set of alternative models of the RGB monitor gamut based on the perceptual variables hue (H), saturation (S), and value (V) or brightness (L). Algorithms for transforming between these models are derived. Particular emphasis is placed on an RGB to HSV non-trigonometric pair of transforms which have been used successfully for about four years in frame buffer painting programs. These are fast, accurate, and adequate in many applications. Computationally more difficult transform pairs are sometimes necessary, however. Guidelines for choosing among the models are provided. Psychophysical corrections are described within the context of the definitions established by the NTSC (National Television Standards Committee).",
"title": ""
}
] |
[
{
"docid": "a36944b193ca1b2423010017b08d5d2c",
"text": "Hand washing is a critical activity in preventing the spread of infection in health-care environments and food preparation areas. Several guidelines recommended a hand washing protocol consisting of six steps that ensure that all areas of the hands are thoroughly cleaned. In this paper, we describe a novel approach that uses a computer vision system to measure the user’s hands motions to ensure that the hand washing guidelines are followed. A hand washing quality assessment system needs to know if the hands are joined or separated and it has to be robust to different lighting conditions, occlusions, reflections and changes in the color of the sink surface. This work presents three main contributions: a description of a system which delivers robust hands segmentation using a combination of color and motion analysis, a single multi-modal particle filter (PF) in combination with a k-means-based clustering technique to track both hands/arms, and the implementation of a multi-class classification of hand gestures using a support vector machine ensemble. PF performance is discussed and compared with a standard Kalman filter estimator. Finally, the global performance of the system is analyzed and compared with human performance, showing an accuracy close to that of human experts.",
"title": ""
},
{
"docid": "779d7109ec18866dde21e5ef8e2911cb",
"text": "The purpose of this study is to provide conceptual order and a tool for the use of computermediated communication (CMC) and computer conferencing in supporting an educational experience. Central to the study introduced here is a model of community inquiry that constitutes three elements essential to an educational transactionÐcognitive presence, social presence, and teaching presence. Indicators (key words/phrases) for each of the three elements emerged from the analysis of computer-conferencing transcripts. The indicators described represent a template or tool for researchers to analyze written transcripts, as well as a guide to educators for the optimal use of computer conferencing as a medium to facilitate an educational transaction. This research would suggest that computer conferencing has considerable potential to create a community of inquiry for educational purposes.",
"title": ""
},
{
"docid": "2f138f030565d85e4dcd9f90585aecb0",
"text": "One of the central questions in neuroscience is how particular tasks, or computations, are implemented by neural networks to generate behavior. The prevailing view has been that information processing in neural networks results primarily from the properties of synapses and the connectivity of neurons within the network, with the intrinsic excitability of single neurons playing a lesser role. As a consequence, the contribution of single neurons to computation in the brain has long been underestimated. Here we review recent work showing that neuronal dendrites exhibit a range of linear and nonlinear mechanisms that allow them to implement elementary computations. We discuss why these dendritic properties may be essential for the computations performed by the neuron and the network and provide theoretical and experimental examples to support this view. 503 A nn u. R ev . N eu ro sc i. 20 05 .2 8: 50 353 2. D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by M as sa ch us et ts I ns tit ut e of T ec hn ol og y (M IT ) on 0 6/ 26 /1 4. F or p er so na l u se o nl y. AR245-NE28-18 ARI 13 May 2005 14:15",
"title": ""
},
{
"docid": "9aa24f6e014ac5104c5b9ff68dc45576",
"text": "The development of social networks has led the public in general to find easy accessibility for communication with respect to rapid communication to each other at any time. Such services provide the quick transmission of information which is its positive side but its negative side needs to be kept in mind thereby misinformation can spread. Nowadays, in this era of digitalization, the validation of such information has become a real challenge, due to lack of information authentication method. In this paper, we design a framework for the rumors detection from the Facebook events data, which is based on inquiry comments. The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments utilizing a rule-based approach which entails regular expressions to categorize the sentences as an inquiry into those starting with an intransitive verb (like is, am, was, will, would and so on) and also those sentences ending with a question mark. We set the threshold value to compare with the ratio of Inquiry to English comments and identify the rumors. We verified the proposed ICDM on labeled data, collected from snopes.com. Our experiments revealed that the proposed method achieved considerably well in comparison to the existing machine learning techniques. The proposed ICDM approach attained better results of 89% precision, 77% recall, and 82% F-measure. We are of the opinion that our experimental findings of this study will be useful for the worldwide adoption. Keywords—Social networks; rumors; inquiry comments; question identification",
"title": ""
},
{
"docid": "4bcfc77dabf9c0545fb28059a6df40c8",
"text": "Over the past decade, machine learning techniques have made substantial advances in many domains. In health care, global interest in the potential of machine learning has increased; for example, a deep learning algorithm has shown high accuracy in detecting diabetic retinopathy.1 There have been suggestions that machine learning will drive changes in health care within a few years, specifically in medical disciplines that require more accurate prognostic models (eg, oncology) and those based on pattern recognition (eg, radiology and pathology). However, comparative studies on the effectiveness of machine learning–based decision support systems (ML-DSS) in medicine are lacking, especially regarding the effects on health outcomes. Moreover, the introduction of new technologies in health care has not always been straightforward or without unintended and adverse effects.2 In this Viewpoint we consider the potential unintended consequences that may result from the application of ML-DSS in clinical practice.",
"title": ""
},
{
"docid": "1667c7e872bac649051bb45fc85e9921",
"text": "Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biométrie identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions-all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.",
"title": ""
},
{
"docid": "1f95cc7adafe07ad9254359ab405a980",
"text": "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.",
"title": ""
},
{
"docid": "d3d471b6b377d8958886a2f6c89d5061",
"text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.",
"title": ""
},
{
"docid": "c68729167831b81a2d694664a4cfa90b",
"text": "Micro aerial vehicles (MAV) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot's motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without loosing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle's trajectory in real-time.",
"title": ""
},
{
"docid": "f9a3f69cf26b279fa8600fd2ebbc3426",
"text": "We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: \"Are there any apples in the fridge?\" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question. Popular reinforcement learning approaches with a single controller perform poorly on IQA owing to the large and diverse state space. We propose the Hierarchical Interactive Memory Network (HIMN), consisting of a factorized set of controllers, allowing the system to operate at multiple levels of temporal abstraction. To evaluate HIMN, we introduce IQUAD V1, a new dataset built upon AI2-THOR [35], a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration. Our experiments show that our proposed model outperforms popular single controller based methods on IQUAD V1. For sample questions and results, please view our video: https://youtu.be/pXd3C-1jr98.",
"title": ""
},
{
"docid": "b52cadf9e20eebfd388c09c51cff2d74",
"text": "Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L∞ metric (it’s highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decisionbased, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L∞ perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.",
"title": ""
},
{
"docid": "f0a82f428ac508351ffa7b97bb909b60",
"text": "Automated Teller Machines (ATMs) can be considered among one of the most important service facilities in the banking industry. The investment in ATMs and the impact on the banking industry is growing steadily in every part of the world. The banks take into consideration many factors like safety, convenience, visibility, and cost in order to determine the optimum locations of ATMs. Today, ATMs are not only available in bank branches but also at retail locations. Another important factor is the cash management in ATMs. A cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data which is highly related to the ATMs location. So, the location and the cash management problem should be considered together. This paper provides a general review on studies, efforts and development in ATMs location and cash management problem. Keywords—ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs.",
"title": ""
},
{
"docid": "834a0c043799097579441a0ca4713eea",
"text": "As users pan and zoom, display content can disappear into off-screen space, particularly on small-screen devices. The clipping of locations, such as relevant places on a map, can make spatial cognition tasks harder. Halo is a visualization technique that supports spatial cognition by showing users the location of off-screen objects. Halo accomplishes this by surrounding off-screen objects with rings that are just large enough to reach into the border region of the display window. From the portion of the ring that is visible on-screen, users can infer the off-screen location of the object at the center of the ring. We report the results of a user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks. When using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks in our study.",
"title": ""
},
{
"docid": "a45c93e89cc3df3ebec59eb0c81192ec",
"text": "We study a variant of the capacitated vehicle routing problem where the cost over each arc is defined as the product of the arc length and the weight of the vehicle when it traverses that arc. We propose two new mixed integer linear programming formulations for the problem: an arc-load formulation and a set partitioning formulation based on q-routes with additional constraints. A family of cycle elimination constraints are derived for the arc-load formulation. We then compare the linear programming (LP) relaxations of these formulations with the twoindex one-commodity flow formulation proposed in the literature. In particular, we show that the arc-load formulation with the new cycle elimination constraints gives the same LP bound as the set partitioning formulation based on 2-cycle-free q-routes, which is stronger than the LP bound given by the two-index one-commodity flow formulation. We propose a branchand-cut algorithm for the arc-load formulation, and a branch-cut-and-price algorithm for the set partitioning formulation strengthened by additional constraints. Computational results on instances from the literature demonstrate that a significant improvement can be achieved by the branch-cut-and-price algorithm over other methods.",
"title": ""
},
{
"docid": "413131b87073a9a9b025e457a0e9323e",
"text": "In this paper, we consider an anthropomorphically-inspired hybrid model of a bipedal robot with locking knees and feet in order to develop a control law that results in human-like walking. The presence of feet results in periods of full actuation and periods of underactuation during the course of a step. Properties of each of these phases of walking are utilized in order to achieve a stable walking gait. In particular, we will show that using controlled symmetries in the fully-actuated domains coupled with “partial” controlled symmetries and local ankle control laws in the underactuated domains yields stable walking; this result is possible due to the amount of time which the biped spends in the fully-actuated domains. The paper concludes with simulation results along with a comparison of these results to human walking data.",
"title": ""
},
{
"docid": "7731315bb30b1888caf4be87aa38a108",
"text": "The problem of scheduling is concerned with searching for optimal (or near-optimal) schedules subject to a number of constraints. A variety of approaches have been developed to solve the problem of scheduling. However, many of these approaches are often impractical in dynamic real-world environments where there are complex constraints and a variety of unexpected disruptions. In most real-world environments, scheduling is an ongoing reactive process where the presence of real-time information continually forces reconsideration and revision of pre-established schedules. Scheduling research has largely ignored this problem, focusing instead on optimisation of static schedules. This paper outlines the limitations of static approaches to scheduling in the presence of real-time information and presents a number of issues that have come up in recent years on dynamic scheduling. The paper defines the problem of dynamic scheduling and provides a review of the state of the art of currently developing research on dynamic scheduling. The principles of several dynamic scheduling techniques, namely, dispatching rules, heuristics, meta-heuristics, artificial intelligence techniques, and multi-agent systems are described in detail, followed by a discussion and comparison of their potential.",
"title": ""
},
{
"docid": "be4a4e3385067ce8642ff83ed76c4dcf",
"text": "We examine what makes a search system domain-specific and find that previous definitions are incomplete. We propose a new definition of domain specific search, together with a corresponding model, to assist researchers, systems designers and system beneficiaries in their analysis of their own domain. This model is then instantiated for two domains: intellectual property search (i.e. patent search) and medical or healthcare search. For each of the two we follow the theoretical model and identify outstanding issues. We find that the choice of dimensions is still an open issue, as linear independence is often absent and specific use-cases, particularly those related to interactive IR, still cannot be covered by the proposed model.",
"title": ""
},
{
"docid": "b885526ab7db7d7ed502698758117c80",
"text": "Cancer, more than any other human disease, now has a surfeit of potential molecular targets poised for therapeutic exploitation. Currently, a number of attractive and validated cancer targets remain outside of the reach of pharmacological regulation. Some have been described as undruggable, at least by traditional strategies. In this article, we outline the basis for the undruggable moniker, propose a reclassification of these targets as undrugged, and highlight three general classes of this imposing group as exemplars with some attendant strategies currently being explored to reclassify them. Expanding the spectrum of disease-relevant targets to pharmacological manipulation is central to reducing cancer morbidity and mortality.",
"title": ""
},
{
"docid": "6a3210307c98b4311271c29da142b134",
"text": "Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research.",
"title": ""
}
] |
scidocsrr
|
c1e1190b69745661acab613b09a58e77
|
The Gridfit algorithm: an efficient and effective approach to visualizing large amounts of spatial data
|
[
{
"docid": "6073601ab6d6e1dbba7a42c346a29436",
"text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.",
"title": ""
}
] |
[
{
"docid": "743825cd8bf6df1f77049b827b004616",
"text": "The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.",
"title": ""
},
{
"docid": "20c38b308892442744628905cc5f6bd2",
"text": "In this paper, we report the results for the experiments we carried out to automatically extract \"problem solved concepts\" from a patent document. We introduce two approaches for finding important information in a patent document. The main focus of our work is to devise methods that can efficiently find the problems an invention solves, as this can help in searching for the prior art and can be used as a mechanism for relevance feedback. We have used software and business process patents to carry out our studies.",
"title": ""
},
{
"docid": "98e8a120c393ac669f03f86944c81068",
"text": "In this paper, we investigate deep neural networks for blind motion deblurring. Instead of regressing for the motion blur kernel and performing non-blind deblurring outside of the network (as most methods do), we propose a compact and elegant end-to-end deblurring network. Inspired by the data-driven sparse-coding approaches that are capable of capturing linear dependencies in data, we generalize this notion by embedding non-linearities into the learning process. We propose a new architecture for blind motion deblurring that consists of an autoencoder that learns the data prior, and an adversarial network that attempts to generate and discriminate between clean and blurred features. Once the network is trained, the generator learns a blur-invariant data representation which when fed through the decoder results in the final deblurred output.",
"title": ""
},
{
"docid": "3bc998aa2dd0a531cf2c449b7fe66996",
"text": "Peer-to-peer and other decentralized,distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack,a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system,the malicious user is able to \"out vote\" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks.Our protocol is based on the \"social network \"among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately-small \"cut\" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create.We show the effectiveness of SybilGuard both analytically and experimentally.",
"title": ""
},
{
"docid": "184c15d2c68ae91c372e74a6aec29582",
"text": "BACKGROUND\nSkilled attendance at childbirth is crucial for decreasing maternal and neonatal mortality, yet many women in low- and middle-income countries deliver outside of health facilities, without skilled help. The main conceptual framework in this field implicitly looks at home births with complications. We expand this to include \"preventive\" facility delivery for uncomplicated childbirth, and review the kinds of determinants studied in the literature, their hypothesized mechanisms of action and the typical findings, as well as methodological difficulties encountered.\n\n\nMETHODS\nWe searched PubMed and Ovid databases for reviews and ascertained relevant articles from these and other sources. Twenty determinants identified were grouped under four themes: (1) sociocultural factors, (2) perceived benefit/need of skilled attendance, (3) economic accessibility and (4) physical accessibility.\n\n\nRESULTS\nThere is ample evidence that higher maternal age, education and household wealth and lower parity increase use, as does urban residence. Facility use in the previous delivery and antenatal care use are also highly predictive of health facility use for the index delivery, though this may be due to confounding by service availability and other factors. Obstetric complications also increase use but are rarely studied. Quality of care is judged to be essential in qualitative studies but is not easily measured in surveys, or without linking facility records with women. Distance to health facilities decreases use, but is also difficult to determine. Challenges in comparing results between studies include differences in methods, context-specificity and the substantial overlap between complex variables.\n\n\nCONCLUSION\nStudies of the determinants of skilled attendance concentrate on sociocultural and economic accessibility variables and neglect variables of perceived benefit/need and physical accessibility. To draw valid conclusions, it is important to consider as many influential factors as possible in any analysis of delivery service use. The increasing availability of georeferenced data provides the opportunity to link health facility data with large-scale household data, enabling researchers to explore the influences of distance and service quality.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "337356e428bbe1c0275a87cd0290de82",
"text": "Finding a parking space in San Francisco City Area is really a headache issue. We try to find a reliable way to give parking information by prediction. We reveals the effect of aggregation on prediction for parking occupancy in San Francisco. Different empirical aggregation levels are tested with several prediction models. Moreover it proposes a sufficient condition leading to prediction error decreasing. Due to the aggregation effect, we would like to explore patterns inside parking. Thus daily occupancy profiles are also investigated to understand travelers behavior in the city.",
"title": ""
},
{
"docid": "ab66d7e267072432d1015e36260c9866",
"text": "Deep Neural Networks (DNNs) are the current state of the art for various tasks such as object detection, natural language processing and semantic segmentation. These networks are massively parallel, hierarchical models with each level of hierarchy performing millions of operations on a single input. The enormous amount of parallel computation makes these DNNs suitable for custom acceleration. Custom accelerators can provide real time inference of DNNs at low power thus enabling widespread embedded deployment. In this paper, we present Snowflake, a high efficiency, low power accelerator for DNNs. Snowflake was designed to achieve optimum occupancy at low bandwidths and it is agnostic to the network architecture. Snowflake was implemented on the Xilinx Zynq XC7Z045 APSoC and achieves a peak performance of 128 G-ops/s. Snowflake is able to maintain a throughput of 98 FPS on AlexNet while averaging 1.2 GB/s of memory bandwidth.",
"title": ""
},
{
"docid": "a7ca3ffcae09ad267281eb494532dc54",
"text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.",
"title": ""
},
{
"docid": "ae151d8ed9b8f99cfe22e593f381dd3b",
"text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
},
{
"docid": "e1ab544e1a00cc6b2f7797f65e084378",
"text": "This research investigates how to introduce synchronous interactive peer learning into an online setting appropriate both for crowdworkers (learning new tasks) and students in massive online courses (learning course material). We present an interaction framework in which groups of learners are formed on demand and then proceed through a sequence of activities that include synchronous group discussion about learner-generated responses. Via controlled experiments with crowdworkers, we show that discussing challenging problems leads to better outcomes than working individually, and incentivizing people to help one another yields still better results. We then show that providing a mini-lesson in which workers consider the principles underlying the tested concept and justify their answers leads to further improvements. Combining the mini-lesson with the discussion of the multiple-choice question leads to significant improvements on that question. We also find positive subjective responses to the peer interactions, suggesting that discussions can improve morale in remote work or learning settings.",
"title": ""
},
{
"docid": "9961f44d4ab7d0a344811186c9234f2c",
"text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.",
"title": ""
},
{
"docid": "0fb41d794c68c513f81d5396d3f05bf4",
"text": "Previous work on question-answering systems has mainly focused on answering individual questions, assuming they are independent and devoid of context. Instead, we investigate sequential question answering, in which multiple related questions are asked sequentially. We introduce a new dataset of fully humanauthored questions. We extend existing strong question answering frameworks to include information about previous asked questions to improve the overall question-answering accuracy in open-domain question answering. The dataset is publicly available at http:// sequential.qanta.org.",
"title": ""
},
{
"docid": "f4d9190ad9123ddcf809f47c71225162",
"text": "Please cite this article in press as: Tseng, M Industrial Engineering (2009), doi:10.1016/ Selection of appropriate suppliers in supply chain management strategy (SCMS) is a challenging issue because it requires battery of evaluation criteria/attributes, which are characterized with complexity, elusiveness, and uncertainty in nature. This paper proposes a novel hierarchical evaluation framework to assist the expert group to select the optimal supplier in SCMS. The rationales for the evaluation framework are based upon (i) multi-criteria decision making (MCDM) analysis that can select the most appropriate alternative from a finite set of alternatives with reference to multiple conflicting criteria, (ii) analytic network process (ANP) technique that can simultaneously take into account the relationships of feedback and dependence of criteria, and (iii) choquet integral—a non-additive fuzzy integral that can eliminate the interactivity of expert subjective judgment problems. A case PCB manufacturing firm is studied and the results indicated that the proposed evaluation framework is simple and reasonable to identify the primary criteria influencing the SCMS, and it is effective to determine the optimal supplier even with the interactive and interdependent criteria/attributes. This hierarchical evaluation framework provides a complete picture in SCMS contexts to both researchers and practitioners. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "52fc069497d79f97e3470f6a9f322151",
"text": "We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.",
"title": ""
},
{
"docid": "65bc99201599ec17347d3fe0857cd39a",
"text": "Many children strive to attain excellence in sport. However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.",
"title": ""
},
{
"docid": "b00ec93bf47aab14aa8ced69612fc39a",
"text": "In today’s increasingly rich material life, people are shifting their focus from the physical world to the spiritual world. In order to identify and care for people’s emotions, human-machine interaction systems have been created. The currently available human-machine interaction systems often support the interaction between human and robot under the line-of-sight (LOS) propagation environment, while most communications in terms of human-to-human and human-to-machine are non-LOS (NLOS). In order to break the limitation of the traditional human–machine interaction system, we propose the emotion communication system based on NLOS mode. Specifically, we first define the emotion as a kind of multimedia which is similar to voice and video. The information of emotion can not only be recognized, but can also be transmitted over a long distance. Then, considering the real-time requirement of the communications between the involved parties, we propose an emotion communication protocol, which provides a reliable support for the realization of emotion communications. We design a pillow robot speech emotion communication system, where the pillow robot acts as a medium for user emotion mapping. Finally, we analyze the real-time performance of the whole communication process in the scene of a long distance communication between a mother-child users’ pair, to evaluate the feasibility and effectiveness of emotion communications.",
"title": ""
},
{
"docid": "509fe613e25c9633df2520e4c3a62b74",
"text": "This study, in an attempt to rise above the intricacy of 'being informed on the verge of globalization,' is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper favors not judging against the superiority of human translation vs. machine translation or automated translation in non-English speaking settings, but rather referring to the inadequacies and adequacies of MT at certain pragmatic levels, lacking the right sense and dynamic equivalence, but producing syntactically well-formed or meaning-extractable outputs in restricted settings. Reasoning in this way, the present study supports MT before, during, and after translation. It aims at making translators understand that they could cooperate with the software to obtain a synergistic effect. In other words, they could have a say and have an essential part to play in a semi-automated translation process (Rodrigo, 2001). In this respect, semi-automated translation or MT courses should be included in the curricula of translation departments worldwide to keep track of the state of the art as well as make potential translators aware of future trends.",
"title": ""
}
] |
scidocsrr
|
1c8bc147bc7b7558c5517d3268d750bf
|
Antenna Array Design for Multi-Gbps mmWave Mobile Broadband Communication
|
[
{
"docid": "ddab10d66473ac7c4de26e923bf59083",
"text": "Phased arrays allow electronic scanning of the antenna beam. However, these phased arrays are not widely used due to a high implementation cost. This article discusses the advantages of the RF architecture and the implementation of silicon RFICs for phased-array transmitters/receivers. In addition, this work also demonstrates how silicon RFICs can play a vital role in lowering the cost of phased arrays.",
"title": ""
}
] |
[
{
"docid": "3de5b395bd2d21d35e03603faf0cd869",
"text": "Facial expression is central to human experience, but most previous databases and studies are limited to posed facial behavior under controlled conditions. In this paper, we present a novel facial expression database, Real-world Affective Face Database (RAF-DB), which contains approximately 30 000 facial images with uncontrolled poses and illumination from thousands of individuals of diverse ages and races. During the crowdsourcing annotation, each image is independently labeled by approximately 40 annotators. An expectation–maximization algorithm is developed to reliably estimate the emotion labels, which reveals that real-world faces often express compound or even mixture emotions. A cross-database study between RAF-DB and CK+ database further indicates that the action units of real-world emotions are much more diverse than, or even deviate from, those of laboratory-controlled emotions. To address the recognition of multi-modal expressions in the wild, we propose a new deep locality-preserving convolutional neural network (DLP-CNN) method that aims to enhance the discriminative power of deep features by preserving the locality closeness while maximizing the inter-class scatter. Benchmark experiments on 7-class basic expressions and 11-class compound expressions, as well as additional experiments on CK+, MMI, and SFEW 2.0 databases, show that the proposed DLP-CNN outperforms the state-of-the-art handcrafted features and deep learning-based methods for expression recognition in the wild. To promote further study, we have made the RAF database, benchmarks, and descriptor encodings publicly available to the research community.",
"title": ""
},
{
"docid": "87e56672751a8eb4d5a08f0459e525ca",
"text": "— The Internet of Things (IoT) has transformed many aspects of modern manufacturing, from design to production to quality control. In particular, IoT and digital manufacturing technologies have substantially accelerated product development cycles and manufacturers can now create products of a complexity and precision not heretofore possible. New threats to supply chain security have arisen from connecting machines to the Internet and introducing complex IoT-based systems controlling manufacturing processes. By attacking these IoT-based manufacturing systems and tampering with digital files, attackers can manipulate physical characteristics of parts and change the dimensions, shapes, or mechanical properties of the parts, which can result in parts that fail in the field. These defects increase manufacturing costs and allow silent problems to occur only under certain loads that can threaten safety and/or lives. To understand potential dangers and protect manufacturing system safety, this paper presents two taxonomies: one for classifying cyber-physical attacks against manufacturing processes and another for quality control measures for counteracting these attacks. We systematically identify and classify possible cyber-physical attacks and connect the attacks with variations in manufacturing processes and quality control measures. Our tax-onomies also provide a scheme for linking emerging IoT-based manufacturing system vulnerabilities to possible attacks and quality control measures.",
"title": ""
},
{
"docid": "b715ca28f59e8a16dad408f4d29aa9c6",
"text": "Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks—at the level of small network subgraphs—remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns.",
"title": ""
},
{
"docid": "f4e171367606f2fe3ea91060333c6257",
"text": "To remain independent and healthy, an important factor to consider is the maintenance of skeletal muscle mass. Inactivity leads to measurable changes in muscle and bone, reduces exercise capacity, impairs the immune system, and decreases the sensitivity to insulin. Therefore, maintaining physical activity is of great importance for skeletal muscle health. One form of structured physical activity is resistance training. Generally speaking, one needs to lift weights at approximately 70% of their one repetition maximum (1RM) to have noticeable increases in muscle size and strength. Although numerous positive effects are observed from heavy resistance training, some at risk populations (e.g. elderly, rehabilitating patients, etc.) might be advised not to perform high-load resistance training and may be limited to performance of low-load resistance exercise. A technique which applies pressure cuffs to the limbs causing blood flow restriction (BFR) has been shown to attenuate atrophy and when combined with low intensity exercise has resulted in an increase in both muscle size and strength across different age groups. We have provided an evidence based model of progression from bed rest to higher load resistance training, based largely on BFR literature concentrating on more at risk populations, to highlight a possible path to recovery.",
"title": ""
},
{
"docid": "fa2c69161ab7955a4cab6d08acc806fe",
"text": "Accurate time-series forecasting during high variance segments (e.g., holidays), is critical for anomaly detection, optimal resource allocation, budget planning and other related tasks. At Uber accurate prediction for completed trips during special events can lead to a more efficient driver allocation resulting in a decreased wait time for the riders. State of the art methods for handling this task often rely on a combination of univariate forecasting models (e.g., Holt-Winters) and machine learning methods (e.g., random forest). Such a system, however, is hard to tune, scale and add exogenous variables. Motivated by the recent resurgence of Long Short Term Memory networks we propose a novel endto-end recurrent neural network architecture that outperforms the current state of the art event forecasting methods on Uber data and generalizes well to a public M3 dataset used for time-series forecasting competitions.",
"title": ""
},
{
"docid": "4bb417fa4328b001abe8f5d2e35fa642",
"text": "The abundance of data posted to Twitter enables companies to extract useful information, such as Twitter users who are dissatisfied with a product. We endeavor to determine which Twitter users are potential customers for companies and would be receptive to product recommendations through the language they use in tweets after mentioning a product of interest. With Twitter’s API, we collected tweets from users who tweeted about mobile devices or cameras. An expert annotator determined whether each tweet was relevant to customer purchase behavior and whether a user, based on their tweets, eventually bought the product. For the relevance task, among four models, a feed-forward neural network yielded the best cross-validation accuracy of over 80% per product. For customer purchase prediction of a product, we observed improved performance with the use of sequential input of tweets to recurrent models, with an LSTM model being best; we also observed the use of relevance predictions in our model to be more effective with less powerful RNNs and on more difficult tasks.",
"title": ""
},
{
"docid": "c530b222761cc93f5e9e9c47bdb62731",
"text": "A function π : V → {1, . . . , k} is a broadcast coloring of order k if π(u) = π(v) implies that the distance between u and v is more than π(u). The minimum order of a broadcast coloring is called the broadcast chromatic number of G, and is denoted χb(G). In this paper we introduce this coloring and study its properties. In particular, we explore the relationship with the vertex cover and chromatic numbers. While there is a polynomial-time algorithm to determine whether χb(G) ≤ 3, we show that it is NP-hard to determine if χb(G) ≤ 4. We also determine the maximum broadcast chromatic number of a tree, and show that the broadcast chromatic number of the infinite grid is finite.",
"title": ""
},
{
"docid": "cff85c5ae5072723b83db4f0b18e123d",
"text": "This paper presents a novel broadband rectenna for ambient wireless energy harvesting over the frequency band from 1.8 to 2.5 GHz. First of all, the characteristics of the ambient radio-frequency energy are studied. The results are then used to aid the design of a new rectenna. A novel two-branch impedance matching circuit is introduced to enhance the performance and efficiency of the rectenna at a relatively low ambient input power level. A novel broadband dual-polarized cross-dipole antenna is proposed which has embedded harmonic rejection property and can reject the second and third harmonics to further improve the rectenna efficiency. The measured power sensitivity of this design is down to -35 dBm and the conversion efficiency reaches 55% when the input power to the rectifier is -10 dBm. It is demonstrated that the output power from the proposed rectenna is higher than the other published designs with a similar antenna size under the same ambient condition. The proposed broadband rectenna could be used to power many low-power electronic devices and sensors and found a range of potential applications.",
"title": ""
},
{
"docid": "7525eda115f8764d36b560fb40f7eb75",
"text": "Following the footsteps of SemEval-2014 Task 4 (Pontiki et al., 2014), SemEval-2015 too had a task dedicated to aspect-level sentiment analysis (Pontiki et al., 2015), which saw participation from over 25 teams. In Aspectbased Sentiment Analysis, the aim is to identify the aspects of entities and the sentiment expressed for each aspect. In this paper, we present a detailed description of our system, that stood 4th in Aspect Category subtask (slot 1), 7th in Opinion Target Expression subtask (slot 2) and 8th in Sentiment Polarity subtask (slot 3) on the Restaurant datasets.",
"title": ""
},
{
"docid": "a33bcc76a5b47d3416a241a1385dadee",
"text": "BACKGROUND\nThe widespread availability of new computational methods and tools for data analysis and predictive modeling requires medical informatics researchers and practitioners to systematically select the most appropriate strategy to cope with clinical prediction problems. In particular, the collection of methods known as 'data mining' offers methodological and technical solutions to deal with the analysis of medical data and construction of prediction models. A large variety of these methods requires general and simple guidelines that may help practitioners in the appropriate selection of data mining tools, construction and validation of predictive models, along with the dissemination of predictive models within clinical environments.\n\n\nPURPOSE\nThe goal of this review is to discuss the extent and role of the research area of predictive data mining and to propose a framework to cope with the problems of constructing, assessing and exploiting data mining models in clinical medicine.\n\n\nMETHODS\nWe review the recent relevant work published in the area of predictive data mining in clinical medicine, highlighting critical issues and summarizing the approaches in a set of learned lessons.\n\n\nRESULTS\nThe paper provides a comprehensive review of the state of the art of predictive data mining in clinical medicine and gives guidelines to carry out data mining studies in this field.\n\n\nCONCLUSIONS\nPredictive data mining is becoming an essential instrument for researchers and clinical practitioners in medicine. Understanding the main issues underlying these methods and the application of agreed and standardized procedures is mandatory for their deployment and the dissemination of results. Thanks to the integration of molecular and clinical data taking place within genomic medicine, the area has recently not only gained a fresh impulse but also a new set of complex problems it needs to address.",
"title": ""
},
{
"docid": "031c67b13cdd534074fb8a5028d07b2a",
"text": "I. MOTIVATION Deploying convolutional neural networks (CNNs) effectively in real-time applications often requires both high throughput and low power consumption. However, a state-of-the-art CNN typically performs about 10 FLOPs per evaluation [1]. Reducing this computational cost has become an essential challenge. Several prior studies have proposed pruning ineffectual features and weights statically, thus reducing the FLOPs [2]. Dedicated hardware accelerators have also been shown to improve performance by exploiting the sparsity in a CNN [3], [4], [5]. However, the aforementioned proposals suffer from increasing the irregularity of the computation at the finest granularity. We propose a novel approach to reduce CNN computation, called channel gating, which dynamically prunes the unnecessary computation specific to a particular image, while minimizing the accuracy loss and hardware modification. Intuitively, channel gating leverages the spatial information inside the input features to identify ineffective receptive fields and skip the corresponding computation by gating a fraction of the input channels. The paper makes the following major contributions: • We introduce the channel gating scheme, which dynamically prunes computation on input channels at receptive field level. • We propose an efficient single-pass training scheme to train the channel gating CNN from scratch, allowing the network to automatically learn an effective gating policy. • We demonstrate the benefits of introducing channel gating in CNNs empirically and get 66% and 60% reduction in FLOPs with 0.22% and 0.29% accuracy loss on the CIFAR-10/100 datasets respectively using a state-of-theart ResNet model. • We propose a specialized accelerator architecture, which improves the performance and energy efficiency of the channel gating CNN inference (Ongoing).",
"title": ""
},
{
"docid": "f56e465f5f45388e5f439d03bf5ec391",
"text": "In this article, the authors evaluate L. Kohlberg's (1984) cognitive- developmental approach to morality, find it wanting, and introduce a more pragmatic approach. They review research designed to evaluate Kohlberg's model, describe how they revised the model to accommodate discrepant findings, and explain why they concluded that it is poorly equipped to account for the ways in which people make moral decisions in their everyday lives. The authors outline in 11 propositions a framework for a new approach that is more attentive to the purposes that people use morality to achieve. People make moral judgments and engage in moral behaviors to induce themselves and others to uphold systems of cooperative exchange that help them achieve their goals and advance their interests.",
"title": ""
},
{
"docid": "e8d2fc861fd1b930e65d40f6ce763672",
"text": "Despite that burnout presents a serious burden for modern society, there are no diagnostic criteria. Additional difficulty is the differential diagnosis with depression. Consequently, there is a need to dispose of a burnout biomarker. Epigenetic studies suggest that DNA methylation is a possible mediator linking individual response to stress and psychopathology and could be considered as a potential biomarker of stress-related mental disorders. Thus, the aim of this review is to provide an overview of DNA methylation mechanisms in stress, burnout and depression. In addition to state-of-the-art overview, the goal of this review is to provide a scientific base for burnout biomarker research. We performed a systematic literature search and identified 25 pertinent articles. Among these, 15 focused on depression, 7 on chronic stress and only 3 on work stress/burnout. Three epigenome-wide studies were identified and the majority of studies used the candidate-gene approach, assessing 12 different genes. The glucocorticoid receptor gene (NR3C1) displayed different methylation patterns in chronic stress and depression. The serotonin transporter gene (SLC6A4) methylation was similarly affected in stress, depression and burnout. Work-related stress and depressive symptoms were associated with different methylation patterns of the brain derived neurotrophic factor gene (BDNF) in the same human sample. The tyrosine hydroxylase (TH) methylation was correlated with work stress in a single study. Additional, thoroughly designed longitudinal studies are necessary for revealing the cause-effect relationship of work stress, epigenetics and burnout, including its overlap with depression.",
"title": ""
},
{
"docid": "b123916f2795ab6810a773ac69bdf00b",
"text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "45c3c54043337e91a44e71945f4d63dd",
"text": "Neutrophils are being increasingly recognized as an important element in tumor progression. They have been shown to exert important effects at nearly every stage of tumor progression with a number of studies demonstrating that their presence is critical to tumor development. Novel aspects of neutrophil biology have recently been elucidated and its contribution to tumorigenesis is only beginning to be appreciated. Neutrophil extracellular traps (NETs) are neutrophil-derived structures composed of DNA decorated with antimicrobial peptides. They have been shown to trap and kill microorganisms, playing a critical role in host defense. However, their contribution to tumor development and metastasis has recently been demonstrated in a number of studies highlighting NETs as a potentially important therapeutic target. Here, studies implicating NETs as facilitators of tumor progression and metastasis are reviewed. In addition, potential mechanisms by which NETs may exert these effects are explored. Finally, the ability to target NETs therapeutically in human neoplastic disease is highlighted.",
"title": ""
},
{
"docid": "da04fee07a2a3c29f007309b7d04e050",
"text": "This paper discusses the influence of the position feedback sensor technology in servo drive applications. Incremental encoders, resolvers, and sinusoidal encoders are considered. Different hardware and software techniques used for the processing of position measurement signals are presented. Advantages and drawbacks are discussed for each solution. The choice of the best suited position feedback sensor for a given motion control application mainly depends on the speed and position control requirements and on the motor type. The dynamic performances and the accuracy of speed and position servo loops are dramatically influenced by the position feedback sensor resolution and accuracy. These points are analyzed in detail in the paper. Stiffness, stability, servo loop bandwidth and response time are considered as performances evaluation criteria for a given servo drive equipped with various position sensors. Rotating and linear AC brushless servo motors are considered. Experimental results are given. 1) INTRODUCTION Due to the development of automation in manufacturing processes, the demands on accurate and fast machine-tools and robots are increasing. In response to these demands, high dynamic AC brushless servo-motors tend to be adopted in motion control systems, and servo drives are expected to deal with a great variety of applications with maximum performances [1]. According to these requirements, high resolution and high accuracy are required for the motor speed and position measurement in order to close the speed and position servo loops with the maximum obtainable bandwidth. Smooth rotation at very low speed is also necessary for the most demanding applications. The particular sensor used depends on the application requirements, however incremental encoders, resolvers and sinusoidal encoders are the most popular. An AC brushless servo drive consists of a permanent magnet synchronous motor (PMSM) equipped with a position sensor mounted on the motor shaft, as presented on figure 1. The position sensor is used for closing the speed and position servo loops and also for the motor currents commutation in order to control the motor torque. The current amplitude is proportional to the desired torque value and the current phase is tied to the rotor position. Figure 1 : Block diagram of an AC brushless servo drive Resolvers are preferred for rotating brushless AC servomotors used in robot applications, because they are very rugged and provide absolute position value suitable for the motor commutation as well as the speed and position feedback for the servo loops. Encoders are mainly used in machine tools applications when a high position accuracy is required for contouring and machining. Encoders are also chosen for direct drive (rotating or linear), because the load is directly mounted on the moving motor part. So, a high number of pulses is necessary in order to get the required position accuracy. However, when incremental encoders are used with brushless AC servomotors, additional Torque control & phase commutation Speed & position control P W M inverter Brushless motor Position sensor Shaft position Set point",
"title": ""
},
{
"docid": "91713d85bdccb2c06d7c50365bd7022c",
"text": "A 1 Mbit MRAM, a nonvolatile memory that uses magnetic tunnel junction (MJT) storage elements, has been characterized for total ionizing dose (TID) and single event latchup (SEL). Our results indicate that these devices show no single event latchup up to an effective LET of 84 MeV-cm2/mg (where our testing ended) and no bit failures to a TID of 75 krad (Si).",
"title": ""
},
{
"docid": "a757624e5fd2d4a364f484d55a430702",
"text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.",
"title": ""
},
{
"docid": "28c06956b3e84dd063cf41a153eff0f5",
"text": "Description Logics (DLs) are suitable, well-known, logics for managing structured knowledge. They allow reasoning about individuals and well defined concepts, i.e. set of individuals with common properties. The experience in using DLs in applications has shown that in many cases we would like to extend their capabilities. In particular, their use in the context of Multimedia Information Retrieval (MIR) leads to the convincement that such DLs should allow the treatment of the inherent imprecision in multimedia object content representation and retrieval. In this paper we will present a fuzzy extension of ALC, combining Zadeh’s fuzzy logic with a classical DL. In particular, concepts becomes fuzzy and, thus, reasoning about imprecise concepts is supported. We will define its syntax, its semantics, describe its properties and present a constraint propagation calculus for reasoning in it.",
"title": ""
}
] |
scidocsrr
|
2ab8b50dd1cfcfe9fe935bdf21cfcbf8
|
Adaptive Seeding in Social Networks
|
[
{
"docid": "3c54b07b159fabe4c3ca1813abfdae6f",
"text": "We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the ‘six degrees of separation’ phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which ‘your friends have more friends than you’. Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.",
"title": ""
}
] |
[
{
"docid": "a1f1d34e8ceeb984976e45074694d4c2",
"text": "This paper proposes a model of the doubly fed induction generator (DFIG) suitable for transient stability studies. The main assumption adopted in the model is that the current control loops, which are much faster than the electromechanic transients under study, do not have a significant influence on the transient stability of the power system and may be considered instantaneous. The proposed DFIG model is a set of algebraic equations which are solved using an iterative procedure. A method is also proposed to calculate the DFIG initial conditions. A detailed variable-speed windmill model has been developed using the proposed DFIG model. This windmill model has been integrated in a transient stability simulation program in order to demonstrate its feasibility. Several simulations have been performed using a base case which includes a small grid, a wind farm represented by a single windmill, and different operation points. The evolution of several electric variables during the simulations is shown and discussed.",
"title": ""
},
{
"docid": "aef40147787610c624b1a0ec80ac0454",
"text": "Business to Business (B2B) marketing aims at meeting the needs of other businesses instead of individual consumers. In B2B markets, the buying processes usually involve series of different marketing campaigns providing necessary information to multiple decision makers with different interests and motivations. The dynamic and complex nature of these processes imposes significant challenges to analyze the process logs for improving the B2B marketing practice. Indeed, most of the existing studies only focus on the individual consumers in the markets, such as movie/product recommender systems. In this paper, we exploit the temporal behavior patterns in the buying processes of the business customers and develop a B2B marketing campaign recommender system. Specifically, we first propose the temporal graph as the temporal knowledge representation of the buying process of each business customer. The key idea is to extract and integrate the campaign order preferences of the customer using the temporal graph. We then develop the low-rank graph reconstruction framework to identify the common graph patterns and predict the missing edges in the temporal graphs. We show that the prediction of the missing edges is effective to recommend the marketing campaigns to the business customers during their buying processes. Moreover, we also exploit the community relationships of the business customers to improve the performances of the graph edge predictions and the marketing campaign recommendations. Finally, we have performed extensive empirical studies on real-world B2B marketing data sets and the results show that the proposed method can effectively improve the quality of the campaign recommendations for challenging B2B marketing tasks.",
"title": ""
},
{
"docid": "bf44cc7e8e664f930edabf20ca06dd29",
"text": "Nowadays, our living environment is rich in radio-frequency energy suitable for harvesting. This energy can be used for supplying low-power consumption devices. In this paper, we analyze a new type of a Koch-like antenna which was designed for energy harvesting specifically. The designed antenna covers two different frequency bands (GSM 900 and Wi-Fi). Functionality of the antenna is verified by simulations and measurements.",
"title": ""
},
{
"docid": "be9971903bf3d754ed18cc89cf254bd1",
"text": "This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.",
"title": ""
},
{
"docid": "f6417f30a8f0358f73ac25e15c9016cd",
"text": "Due to large quantity of data needed for image synthesis in SAR applications, methods of raw signal compression were developed alongside actual imaging systems. Although performance of modern processing units allows on-platform, online image synthesis, data compressor still can be a valuable addition. Since it is no longer necessary part of SAR system, it should be delivered in a flexible, easy to use and low cost form - like low-resources demanding Intellectual Property core. In this paper chosen properties of raw SAR signal and some of compression methods are presented followed by compressor IP core implementation results.",
"title": ""
},
{
"docid": "4da24a06b91c53c730c7ec24b69c2980",
"text": "We review the development of diffuse-interface models of hydrodynamics and their application to a wide variety of interfacial phenomena. These models have been applied successfully to situations in which the physical phenomena of interest have a length scale commensurate with the thickness of the interfacial region (e.g. near-critical interfacial phenomena or small-scale flows such as those occurring near contact lines) and fluid flows involving large interface deformations and/or topological changes (e.g. breakup and coalescence events associated with fluid jets, droplets, and large-deformation waves). We discuss the issues involved in formulating diffuse-interface models for single-component and binary fluids. Recent applications and computations using these models are discussed in each case. Further, we address issues including sharp-interface analyses that relate these models to the classical free-boundary problem, computational approaches to describe interfacial phenomena, and models of fully miscible fluids.",
"title": ""
},
{
"docid": "9824b33621ad02c901a9e16895d2b1a6",
"text": "Objective This systematic review aims to summarize current evidence on which naturally present cannabinoids contribute to cannabis psychoactivity, considering their reported concentrations and pharmacodynamics in humans. Design Following PRISMA guidelines, papers published before March 2016 in Medline, Scopus-Elsevier, Scopus, ISI-Web of Knowledge and COCHRANE, and fulfilling established a-priori selection criteria have been included. Results In 40 original papers, three naturally present cannabinoids (∆-9-Tetrahydrocannabinol, ∆-8-Tetrahydrocannabinol and Cannabinol) and one human metabolite (11-OH-THC) had clinical relevance. Of these, the metabolite produces the greatest psychoactive effects. Cannabidiol (CBD) is not psychoactive but plays a modulating role on cannabis psychoactive effects. The proportion of 9-THC in plant material is higher (up to 40%) than in other cannabinoids (up to 9%). Pharmacodynamic reports vary due to differences in methodological aspects (doses, administration route and volunteers' previous experience with cannabis). Conclusions Findings reveal that 9-THC contributes the most to cannabis psychoactivity. Due to lower psychoactive potency and smaller proportions in plant material, other psychoactive cannabinoids have a weak influence on cannabis final effects. Current lack of standard methodology hinders homogenized research on cannabis health effects. Working on a standard cannabis unit considering 9-THC is recommended.",
"title": ""
},
{
"docid": "da70ea31d80e8ba3e61fb2627cf7fc34",
"text": "The aim of this study was to assess the threshold value for the perception of color changes of human gingiva. Standardized presentations of five cases in the esthetic zone were made with the gingiva and teeth separated. The color parameters L, a, and b (CIELab) of the gingival layers were adjusted to induce darker and lighter colors. In the presentations, the right side (maxillary right anterior) was unchanged, while the left side (maxillary left anterior) of the pictures was modified. Ten dentists, 10 dental technicians, and 10 lay people evaluated the color difference of the pictures. The mean ΔE threshold values ranged between 1.6 ± 1.1 (dental technicians) and 3.4 ± 1.9 (lay people). The overall ΔE amounted to 3.1 ± 1.5.",
"title": ""
},
{
"docid": "5b301f12f2d3fa52003b4b30654669f2",
"text": "This paper presents the design and measurement results of a single-chip front-end monolithic microwave integrated circuit (MMIC), incorporating a high-power amplifier, transmit–receive switch, low-noise amplifier, and calibration coupler, realized in 0.25 $\\mu \\text{m}$ AlGaN/GaN-on-SiC MMIC technology of UMS (GH25-10). The MMIC is operating in C-band (5.2–5.6 GHz) and is targeting the next generation spaceborne synthetic aperture radar. The use of GaN technology has resulted in a design that is robust against antenna load variation in transmit as well as against high received power levels, without the need for an additional limiter. By including a transmit-receive switch on the MMIC there is no need for an external circulator, resulting in a significant size and weight reduction of the transmit–receive module. The measured output power in transmit is higher than 40 W with 36% PAE. The receive gain is higher than 31 dB with better than 2.4 dB noise figure. To the best of the author’s knowledge this is the first time such performance has been demonstrated for a single-chip implementation of a C-band transmit–receive front-end.",
"title": ""
},
{
"docid": "d01a22301de1274220a16351d14d4d83",
"text": "In this paper, we propose a solution to the problems and the features encountered in the geometric modeling of the 6 DOF manipulator arm, the Fanuc. Among these, the singularity of the Jacobian matrix obtained by the kinematic model and which has a great influence on the boundaries and accessibility of the workspace of manipulator robot and it reduce the number of solutions found. We can decompose it into several sub-matrices of smaller dimensions, for ultimately a non-linear equation with two unknowns. We validate our work by conducting a simulation software platform that allows us to verify the results of manipulation in a virtual reality environment based on VRML and Matlab software, integration with the CAD model.",
"title": ""
},
{
"docid": "275a367d6064409180836967ec1513d2",
"text": "Recent advances in signal processing for the detection of Steady-State Visual Evoked Potentials (SSVEPs) have moved away from traditionally calibrationless methods, such as canonical correlation analysis, and towards algorithms that require substantial training data. In general, this has improved detection rates, but SSVEP-based brain-computer interfaces (BCIs) now suffer from the requirement of costly calibration sessions. Here, we address this issue by applying transfer learning techniques to SSVEP detection. Our novel Adaptive-C3A method incorporates an unsupervised adaptation algorithm that requires no calibration data. Our approach learns SSVEP templates for the target user and provides robust class separation in feature space leading to increased classification accuracy. Our method achieves significant improvements in performance over a standard CCA method as well as a transfer variant of the state-of-the art Combined-CCA method for calibrationless SSVEP detection.",
"title": ""
},
{
"docid": "9b71d11e2096008bc3603c62d89e452e",
"text": "Abstract In the present study biodiesel was synthesized from Waste Cook Oil (WCO) by three-step method and regressive analyzes of the process was done. The raw oil, containing 1.9wt% Free Fatty Acid (FFA) and viscosity was 47.6mm/s. WCO was collected from local restaurant of Sylhet city in Bangladesh. Transesterification method gives lower yield than three-step method. In the three-step method, the first step is saponification of the oil followed by acidification to produce FFA and finally esterification of FFA to produce biodiesel. In the saponification reaction, various reaction parameters such as oil to sodium hydroxide molar ratio and reaction time were optimized and the oil to NaOH molar ratio was 1:2, In the esterification reaction, the reaction parameters such as methanol to FFA molar ratio, catalyst concentration and reaction temperature were optimized. Silica gel was used during esterification reaction to adsorb water produced in the reaction. Hence the reaction rate was increased and finally the FFA was reduced to 0.52wt%. A factorial design was studied for esterification reaction based on yield of biodiesel. Finally various properties of biodiesel such as FFA, viscosity, specific gravity, cetane index, pour point, flash point etc. were measured and compared with biodiesel and petro-diesel standard. The reaction yield was 79%.",
"title": ""
},
{
"docid": "8bd0c280a95f549bd5596fb1f7499e44",
"text": "Mobile devices are becoming ubiquitous. People take pictures via their phone cameras to explore the world on the go. In many cases, they are concerned with the picture-related information. Understanding user intent conveyed by those pictures therefore becomes important. Existing mobile applications employ visual search to connect the captured picture with the physical world. However, they only achieve limited success due to the ambiguity nature of user intent in the picture-one picture usually contains multiple objects. By taking advantage of multitouch interactions on mobile devices, this paper presents a prototype of interactive mobile visual search, named TapTell, to help users formulate their visual intent more conveniently. This kind of search leverages limited yet natural user interactions on the phone to achieve more effective visual search while maintaining a satisfying user experience. We make three contributions in this work. First, we conduct a focus study on the usage patterns and concerned factors for mobile visual search, which in turn leads to the interactive design of expressing visual intent by gesture. Second, we introduce four modes of gesture-based interactions (crop, line, lasso, and tap) and develop a mobile prototype. Third, we perform an in-depth usability evaluation on these different modes, which demonstrates the advantage of interactions and shows that lasso is the most natural and effective interaction mode. We show that TapTell provides a natural user experience to use phone camera and gesture to explore the world. Based on the observation and conclusion, we also suggest some design principles for interactive mobile visual search in the future.",
"title": ""
},
{
"docid": "6c8efc45b1e5a3c8e9953eeed9756cbb",
"text": "Formulation of proper and efficient algorithms for robot kinematics is essential for the analysis and design of serial manipulators. Kinematic modeling of manipulators is most often performed in Cartesian space. However, due to disadvantages of most widely used mathematical constructs for description of orientation such as Euler angles and rotational matrices, a need for unambiguous, compact, singularity free, computationally efficient method for representing rotational information is imposed. As a solution, unit quaternions are proposed and kinematic modeling in dual quaternion space arose. In this paper, an overview of spatial descriptions and transformations that can be applied together within these spaces in order to solve kinematic problems is presented. Special emphasis is on a different mathematical formalisms used to represent attitude of a rigid body such as rotation matrix, Euler angles, axis-angle representation, unit quaternions, and their mutual relation. Benefits of kinematic modeling in quaternion space are presented. New direct kinematics algorithm in dual quaternion space pertaining to a particular manipulator is given. These constructs and algorithms are demonstrated on the human centrifuge as 3 DoF robot manipulator.",
"title": ""
},
{
"docid": "def89ddd4342d517285f4623c488ea1f",
"text": "We discuss deep reinforcement learning in an overview style. We draw a big picture, filled with details. We discuss six core elements, six important mechanisms, and twelve applications, focusing on contemporary work, and in historical contexts. We start with background of artificial intelligence, machine learning, deep learning, and reinforcement learning (RL), with resources. Next we discuss RL core elements, including value function, policy, reward, model, exploration vs. exploitation, and representation. Then we discuss important mechanisms for RL, including attention and memory, unsupervised learning, hierarchical RL, multiagent RL, relational RL, and learning to learn. After that, we discuss RL applications, including games, robotics, natural language processing (NLP), computer vision, finance, business management, healthcare, education, energy, transportation, computer systems, and, science, engineering, and art. Finally we summarize briefly, discuss challenges and opportunities, and close with an epilogue. 1",
"title": ""
},
{
"docid": "5547f8ad138a724c2cc05ce65f50ebd2",
"text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we defines QA activity levels for ML product.",
"title": ""
},
{
"docid": "2b16725c22f06b8155ce948636877004",
"text": "The Internet of Things (IoT) aims to connect billions of smart objects to the Internet, which can bring a promising future to smart cities. These objects are expected to generate large amounts of data and send the data to the cloud for further processing, especially for knowledge discovery, in order that appropriate actions can be taken. However, in reality sensing all possible data items captured by a smart object and then sending the complete captured data to the cloud is less useful. Further, such an approach would also lead to resource wastage (e.g., network, storage, etc.). The Fog (Edge) computing paradigm has been proposed to counterpart the weakness by pushing processes of knowledge discovery using data analytics to the edges. However, edge devices have limited computational capabilities. Due to inherited strengths and weaknesses, neither Cloud computing nor Fog computing paradigm addresses these challenges alone. Therefore, both paradigms need to work together in order to build a sustainable IoT infrastructure for smart cities. In this article, we review existing approaches that have been proposed to tackle the challenges in the Fog computing domain. Specifically, we describe several inspiring use case scenarios of Fog computing, identify ten key characteristics and common features of Fog computing, and compare more than 30 existing research efforts in this domain. Based on our review, we further identify several major functionalities that ideal Fog computing platforms should support and a number of open challenges toward implementing them, to shed light on future research directions on realizing Fog computing for building sustainable smart cities.",
"title": ""
},
{
"docid": "17192a9edb1e6eb3d9809d432d2d38bc",
"text": "Purpose This concept paper presents the process of constructing a language tailored to describing insider threat incidents, for the purposes of mitigating threats originating from legitimate users in an IT infrastructure. Various information security surveys indicate that misuse by legitimate (insider) users has serious implications for the health of IT environments. A brief discussion of survey data and insider threat concepts is followed by an overview of existing research efforts to mitigate this particular problem. None of the existing insider threat mitigation frameworks provide facilities for systematically describing the elements of misuse incidents, and thus all threat mitigation frameworks could benefit from the existence of a domain specific language for describing legitimate user actions. The paper presents a language development methodology which centres upon ways to abstract the insider threat domain and approaches to encode the abstracted information into language semantics. Due to lack of suitable insider case repositories, and the fact that most insider misuse frameworks have not been extensively implemented in practice, the aforementioned language construction methodology is based upon observed information security survey trends and the study of existing insider threat and intrusion specification frameworks. The development of a domain specific language goes through various stages of refinement that might eventually contradict these preliminary findings. Practical implications This paper summarizes the picture of the insider threat in IT infrastructures and provides a useful reference for insider threat modeling researchers by indicating ways to abstract insider threats. The problems of constructing insider threat signatures and utilizing them in insider threat models are also discussed.",
"title": ""
},
{
"docid": "ed6aec69b76444f877343277865e2fd0",
"text": "Abstract Context: In the context of exploring the art, science and engineering of programming, the question of which programming languages should be taught first has been fiercely debated since computer science teaching started in universities. Failure to grasp programming readily almost certainly implies failure to progress in computer science. Inquiry:What first programming languages are being taught? There have been regular national-scale surveys in Australia and New Zealand, with the only US survey reporting on a small subset of universities. This the first such national survey of universities in the UK. Approach: We report the results of the first survey of introductory programming courses (N = 80) taught at UK universities as part of their first year computer science (or related) degree programmes, conducted in the first half of 2016. We report on student numbers, programming paradigm, programming languages and environment/tools used, as well as the underpinning rationale for these choices. Knowledge: The results in this first UK survey indicate a dominance of Java at a time when universities are still generally teaching students who are new to programming (and computer science), despite the fact that Python is perceived, by the same respondents, to be both easier to teach as well as to learn. Grounding: We compare the results of this survey with a related survey conducted since 2010 (as well as earlier surveys from 2001 and 2003) in Australia and New Zealand. Importance: This survey provides a starting point for valuable pedagogic baseline data for the analysis of the art, science and engineering of programming, in the context of substantial computer science curriculum reform in UK schools, as well as increasing scrutiny of teaching excellence and graduate employability for UK universities.",
"title": ""
},
{
"docid": "5ea2508e3b9fb70d9613d8dc7d1ca093",
"text": "Provenance, a meta-data describing the derivation history of data, is crucial for the uptake of cloud computing to enhance reliability, credibility, accountability, transparency, and confidentiality of digital objects in a cloud. In this paper, we survey current mechanisms that support provenance for cloud computing, we classify provenance according to its granularities encapsulating the various sets of provenance data for different use cases, and we summarize the challenges and requirements for collecting provenance in a cloud, based on which we show the gap between current approaches to requirements. Additionally, we propose our approach, Data PROVE, that aims to effectively and efficiently satisfy those challenges and requirements in cloud provenance, and to provide a provenance supplemented cloud for better integrity and safety of customers' data.",
"title": ""
}
] |
scidocsrr
|
e73f9e0e2a492573894a4ab446117acf
|
Composing Relationships with Translations
|
[
{
"docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0",
"text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.",
"title": ""
}
] |
[
{
"docid": "55ce6b1ef6cf0335c8180347a17d73fd",
"text": "Live migration is a key technique for virtual machine (VM) management in data center networks, which enables flexibility in resource optimization, fault tolerance, and load balancing. Despite its usefulness, the live migration still introduces performance degradations during the migration process. Thus, there has been continuous efforts in reducing the migration time in order to minimize the impact. From the network's perspective, the migration time is determined by the amount of data to be migrated and the available bandwidth used for such transfer. In this paper, we examine the problem of how to schedule the migrations and how to allocate network resources for migration when multiple VMs need to be migrated at the same time. We consider the problem in the Software-defined Network (SDN) context since it provides flexible control on routing. More specifically, we propose a method that computes the optimal migration sequence and network bandwidth used for each migration. We formulate this problem as a mixed integer programming, which is NP-hard. To make it computationally feasible for large scale data centers, we propose an approximation scheme via linear approximation plus fully polynomial time approximation, and obtain its theoretical performance bound. Through extensive simulations, we demonstrate that our fully polynomial time approximation (FPTA) algorithm has a good performance compared with the optimal solution and two state-of-the-art algorithms. That is, our proposed FPTA algorithm approaches to the optimal solution with less than 10% variation and much less computation time. Meanwhile, it reduces the total migration time and the service downtime by up to 40% and 20% compared with the state-of-the-art algorithms, respectively.",
"title": ""
},
{
"docid": "f36b101aa059792e21281bff8157568f",
"text": "Many research projects oriented on control mechanisms of virtual agents in videogames have emerged in recent years. However, this boost has not been accompanied with the emergence of toolkits supporting development of these projects, slowing down the progress in the field. Here, we present Pogamut 3, an open source platform for rapid development of behaviour for virtual agents embodied in a 3D environment of the Unreal Tournament 2004 videogame. Pogamut 3 is designed to support research as well as educational projects. The paper also briefly touches extensions of Pogamut 3; the ACT-R integration, the emotional model ALMA integration, support for control of avatars at the level of gestures, and a toolkit for developing educational scenarios concerning orientation in urban areas. These extensions make Pogamut 3 applicable beyond the domain of computer games.",
"title": ""
},
{
"docid": "1da747ae58d80c218811618be4538a7b",
"text": "Smartphones and other trendy mobile wearable devices are rapidly becoming the dominant sensing, computing and communication devices in peoples' daily lives. Mobile crowd sensing is an emerging technology based on the sensing and networking capabilities of such mobile wearable devices. MCS has shown great potential in improving peoples' quality of life, including healthcare and transportation, and thus has found a wide range of novel applications. However, user privacy and data trustworthiness are two critical challenges faced by MCS. In this article, we introduce the architecture of MCS and discuss its unique characteristics and advantages over traditional wireless sensor networks, which result in inapplicability of most existing WSN security solutions. Furthermore, we summarize recent advances in these areas and suggest some future research directions.",
"title": ""
},
{
"docid": "8fa258c448dbed5f8f1192e5e12e8c9d",
"text": "OBJECTIVE\nThe purpose of this study was to assess anatomic and functional results after the laparoscopic Davydov procedure for the creation of a neovagina in Rokitansky syndrome.\n\n\nSTUDY DESIGN\nThirty patients with Rokitansky syndrome underwent the laparoscopic Davydov technique from June 2005-August 2008. Mean follow-up time lasted 30 months (range, 6-44 months) and included clinical examinations and evaluation of the quality of sexual intercourse; vaginoscopy, Schiller's test, and neovaginal biopsies were performed after 6 and 12 months. Functional results were assessed with the use of Rosen's Female Sexual Function Index and were compared with age-matched normal control subjects.\n\n\nRESULTS\nNo perioperative complications occurred. At 6 months, anatomic success was achieved in 97% of the patients (n = 29); functional success and optimal results for the Female Sexual Function Index questionnaire were obtained in 96% of patients. Vaginoscopy and biopsy results showed a normal iodine-positive vaginal epithelium.\n\n\nCONCLUSION\nThe Davydov technique seems to be a safe and effective treatment for vaginal agenesis in patients with Rokitansky syndrome.",
"title": ""
},
{
"docid": "0b7718d4ed9c06536f7b120bc73b72ce",
"text": "The feasibility of a 1.2kV GaN switch based on two series-connected 650V GaN transistors is demonstrated in this paper. Aside to achieve ultra-fast transitions and reduced switching energy loss, stacking GaN transistors enables compatibility with high-voltage GaN-on-Silicon technologies. A proof-of-concept is provided by electrical characterization and hard-switching operation of a GaN Super-Cascode built with discrete components. Further investigations to enhance stability with auxiliary components are carried out by simulations and co-integrated prototypes are proven at wafer level.",
"title": ""
},
{
"docid": "16e6acd62753e8c0c206bde20f3cbe52",
"text": "In this paper we focus our attention on the comparison of various lemmatization and stemming algorithms, which are often used in nature language processing (NLP). Sometimes these two techniques are considered to be identical, but there is an important difference. Lemmatization is generally more utilizable, because it produces the basic word form which is required in many application areas (i.e. cross-language processing and machine translation). However, lemmatization is a difficult task especially for highly inflected natural languages having a lot of words for the same normalized word form. We present a novel lemmatization algorithm which utilizes the multilingual semantic thesaurus Eurowordnet (EWN). We describe the algorithm in detail and compare it with other widely used algorithms for word normalization on two different corpora. We present promising results obtained by our EWN-based lemmatization approach in comparison to other techniques. We also discuss the influence of the word normalization on classification task in general. In overall, the performance of our method is good and it achieves similar precision and recall in comparison with other word normalization methods. However, our experiments indicate that word normalization does not affect the text classification task significantly.",
"title": ""
},
{
"docid": "441e0a882bafc17a75fe9e2dbf3634f1",
"text": "Cloud computing focuses on delivery of reliable, secure, faulttolerant, sustainable, and scalable infrastructures for hosting internet-based application services. These applications have different composition, configuration, and deployment requirements. Cloud service providers are willing to provide large scaled computing infrastructure at a cheap prices. Quantifying the performance of scheduling and allocation policy on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem to tackle. This problem can be tackle with the help of mobile agents. Mobile agent being a process that can transport its state from one environment to another, with its data intact, and is capable of performing appropriately in the new environment. This work proposes an agent based framework for providing scalability in cloud computing environments supported with algorithms for searching another cloud when the approachable cloud becomes overloaded and for searching closest datacenters with least response time of virtual machine (VM).",
"title": ""
},
{
"docid": "50c78e339e472f1b1814687f7d0ec8c6",
"text": "Frontonasal dysplasia (FND) refers to a class of midline facial malformations caused by abnormal development of the facial primordia. The term encompasses a spectrum of severities but characteristic features include combinations of ocular hypertelorism, malformations of the nose and forehead and clefting of the facial midline. Several recent studies have drawn attention to the importance of Alx homeobox transcription factors during craniofacial development. Most notably, loss of Alx1 has devastating consequences resulting in severe orofacial clefting and extreme microphthalmia. In contrast, mutations of Alx3 or Alx4 cause milder forms of FND. Whilst Alx1, Alx3 and Alx4 are all known to be expressed in the facial mesenchyme of vertebrate embryos, little is known about the function of these proteins during development. Here, we report the establishment of a zebrafish model of Alx-related FND. Morpholino knock-down of zebrafish alx1 expression causes a profound craniofacial phenotype including loss of the facial cartilages and defective ocular development. We demonstrate for the first time that Alx1 plays a crucial role in regulating the migration of cranial neural crest (CNC) cells into the frontonasal primordia. Abnormal neural crest migration is coincident with aberrant expression of foxd3 and sox10, two genes previously suggested to play key roles during neural crest development, including migration, differentiation and the maintenance of progenitor cells. This novel function is specific to Alx1, and likely explains the marked clinical severity of Alx1 mutation within the spectrum of Alx-related FND.",
"title": ""
},
{
"docid": "9aa0692e9fe89f844396e790b9ec4357",
"text": "This paper discusses methods for assigning codewords for the purpose of ngerprinting digital data (e.g., software, documents, and images). Fingerprinting consists of uniquely marking and registering each copy of the data. This marking allows a distributor to detect any unauthorized copy and trace it back to the user. This threat of detection will hopefully deter users from releasing unauthorized copies. A problem arises when users collude: For digital data, two di erent ngerprinted objects can be compared and the di erences between them detected. Hence, a set of users can collude to detect the location of the ngerprint. They can then alter the ngerprint to mask their identities. We present a general ngerprinting solution which is secure in the context of collusion. In addition, we discuss methods for distributing ngerprinted data.",
"title": ""
},
{
"docid": "d0e2597ff99ced212198a37d2b58d487",
"text": "We describe our experience of implementing a news content organization system at Tencent that discovers events from vast streams of breaking news and evolves news story structures in an online fashion. Our real-world system has distinct requirements in contrast to previous studies on topic detection and tracking (TDT) and event timeline or graph generation, in that we 1) need to accurately and quickly extract distinguishable events from massive streams of long text documents that cover diverse topics and contain highly redundant information, and 2) must develop the structures of event stories in an online manner, without repeatedly restructuring previously formed stories, in order to guarantee a consistent user viewing experience. In solving these challenges, we propose Story Forest, a set of online schemes that automatically clusters streaming documents into events, while connecting related events in growing trees to tell evolving stories. We conducted extensive evaluation based on 60 GB of real-world Chinese news data, although our ideas are not language-dependent and can easily be extended to other languages, through detailed pilot user experience studies. The results demonstrate the superior capability of Story Forest to accurately identify events and organize news text into a logical structure that is appealing to human readers, compared to multiple existing algorithm frameworks.",
"title": ""
},
{
"docid": "bb253cee8f3b8de7c90e09ef878434f3",
"text": "Under most widely-used security mechanisms the programs users run possess more authority than is strictly necessary, with each process typically capable of utilising all of the user’s privileges. Consequently such security mechanisms often fail to protect against contemporary threats, such as previously unknown (‘zero-day’) malware and software vulnerabilities, as processes can misuse a user’s privileges to behave maliciously. Application restrictions and sandboxes can mitigate threats that traditional approaches to access control fail to prevent by limiting the authority granted to each process. This developing field has become an active area of research, and a variety of solutions have been proposed. However, despite the seriousness of the problem and the security advantages these schemes provide, practical obstacles have restricted their adoption. This paper describes the motivation for application restrictions and sandboxes, presenting an indepth review of the literature covering existing systems. This is the most comprehensive review of the field to date. The paper outlines the broad categories of existing application-oriented access control schemes, such as isolation and rule-based schemes, and discusses their limitations. Adoption of these schemes has arguably been impeded by workflow, policy complexity, and usability issues. The paper concludes with a discussion on areas for future work, and points a way forward within this developing field of research with recommendations for usability and abstraction to be considered to a further extent when designing application-oriented access",
"title": ""
},
{
"docid": "84845323a1dcb318bb01fef5346c604d",
"text": "This paper introduced a centrifugal impeller-based wall-climbing robot with the μCOS-II System. Firstly, the climber's basic configurations of mechanical were described. Secondly, the mechanic analyses of walking mechanism was presented, which was essential to the suction device design. Thirdly, the control system including the PC remote control system and the STM32 master slave system was designed. Finally, an experiment was conducted to test the performance of negative pressure generating system and general abilities of wall-climbing robot.",
"title": ""
},
{
"docid": "49e5f9e36efb6b295868a307c1486c60",
"text": "This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and degree of validation that has been done in different clinical domains. Then, we present a classification of methodology in terms of use of prior information. We conclude by selecting ten papers which have presented original ideas that have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem",
"title": ""
},
{
"docid": "0a4573a440eb40a667c18923b0f35636",
"text": "Article history: Received 31 October 2016 Received in revised form 4 May 2017 Accepted 5 July 2017 Available online xxxx",
"title": ""
},
{
"docid": "52a3688f1474b824a6696b03a8b6536c",
"text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed for significantly improving the accuracy of the credit scoring models. In this paper, two-stage genetic programming (2SGP) is proposed to deal with the credit scoring problem by incorporating the advantages of the IF–THEN rules and the discriminant function. On the basis of the numerical results, we can conclude that 2SGP can provide the better accuracy than other models. 2005 Published by Elsevier Inc. 0096-3003/$ see front matter 2005 Published by Elsevier Inc. doi:10.1016/j.amc.2005.05.027 * Corresponding author. Address: Institute of Management of Technology and Institute of Traffic and Transportation College of Management, National Chiao Tung University, 1001 TaHsueh Road, Hsinchu 300, Taiwan. E-mail address: u5460637@ms16.hinet.net (G.-H. Tzeng). 2 J.-J. Huang et al. / Appl. Math. Comput. xxx (2005) xxx–xxx ARTICLE IN PRESS",
"title": ""
},
{
"docid": "fa2a7c000f374741b6a9e5eea9a0d089",
"text": "for Open Networks, Closed Regimes \" Many hope that information technology will generate new opportunities for global communications, breaking down national barriers even in dictatorial regimes with minimal freedom of the press. Kalathil and Boas provide a path-breaking and thoughtful analysis of this issue. A fascinating study, this should be widely read by all concerned with understanding and promoting democratization, regime change, and new information technology. \" \" Through a country-by-country analysis, Kalathil and Boas shed light on practices formerly known only by anecdote, and their findings chip away at the apocryphal notion that going digital necessarily means going democratic. Their work answers a number of important questions, and frames a worthy challenge to those who wish to deploy technology for the cause of political openness. \" The Carnegie Endowment normally does not take institutional positions on public policy issues; the views and recommendations presented in this publication do not necessarily represent the views of the Carnegie Endowment, its officers, staff, or trustees.",
"title": ""
},
{
"docid": "6b94d16dd18c76d453af87d0f878fbfd",
"text": "In this paper, a rigorous design methodology is developed to design compact planar branch-line and rat-race couplers using asymmetrical T-structures. The quarter-wave transmission line, namely the basic element for realizing the coupler, can be replaced by the asymmetrical T-structure, which is composed of a low-impedance shunt stub and two series high-impedance lines with unequal electrical lengths. As compared with the use of the conventional symmetrical T-structure, employing the asymmetrical one to implement the coupler not only has the advantage of flexibly interleaving the shunt stubs to achieve a more compact circuit size, but also provides a wider return loss bandwidth. Based on the proposed designed methodology, the asymmetrical T-structure can be exactly synthesized and then applied to implement the compact planar couplers. The developed planar branch-line coupler occupies 12.2% of the conventional structure and has a 35.5% 10-dB return loss bandwidth. On the other hand, the rat-race coupler is miniaturized to a 5% circuit size and developed with a 29.5% 20-dB return loss bandwidth.",
"title": ""
},
{
"docid": "afc9fbf2db89a5220c897afcbabe028f",
"text": "Evidence for viewpoint-specific image-based object representations have been collected almost entirely using exemplar-specific recognition tasks. Recent results, however, implicate image-based processes in more categorical tasks, for instance when objects contain qualitatively different 3D parts. Although such discriminations approximate class-level recognition. they do not establish whether image-based representations can support generalization across members of an object class. This issue is critical to any theory of recognition, in that one hallmark of human visual competence is the ability to recognize unfamiliar instances of a familiar class. The present study addresses this questions by testing whether viewpoint-specific representations for some members of a class facilitate the recognition of other members of that class. Experiment 1 demonstrates that familiarity with several members of a class of novel 3D objects generalizes in a viewpoint-dependent manner to cohort objects from the same class. Experiment 2 demonstrates that this generalization is based on the degree of familiarity and the degree of geometrical distinctiveness for particular viewpoints. Experiment 3 demonstrates that this generalization is restricted to visually-similar objects rather than all objects learned in a given context. These results support the hypothesis that image-based representations are viewpoint dependent, but that these representations generalize across members of perceptually-defined classes. More generally, these results provide evidence for a new approach to image-based recognition in which object classes are represented as cluster of visually-similar viewpoint-specific representations.",
"title": ""
},
{
"docid": "832839144d44e0ca5359d312767a07f1",
"text": "Digit recognition can be difficult due to different writing styles of people. Therefore, there are a lot of feature extraction methods. In this paper, commonly used feature extraction methods in digit recognition and their performance are investigated. These methods are Gradient Features, Diagonal Features, Zoninig Features, Projection Histograms Features and Chaincode Features.",
"title": ""
}
] |
scidocsrr
|
8c2d04404d828edaeeb8eee42312bf41
|
Authentication Protocols for Internet of Things: A Comprehensive Survey
|
[
{
"docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f",
"text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.",
"title": ""
},
{
"docid": "955882547c8d7d455f3d0a6c2bccd2b4",
"text": "Recently there has been quite a number of independent research activities that investigate the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; iii) we analyze the characteristics of the SIoT network structure by means of simulations.",
"title": ""
}
] |
[
{
"docid": "6c12755ba2580d5d9b794b9a33c0304a",
"text": "A fundamental part of conducting cross-disciplinary web science research is having useful, high-quality datasets that provide value to studies across disciplines. In this paper, we introduce a large, hand-coded corpus of online harassment data. A team of researchers collaboratively developed a codebook using grounded theory and labeled 35,000 tweets. Our resulting dataset has roughly 15% positive harassment examples and 85% negative examples. This data is useful for training machine learning models, identifying textual and linguistic features of online harassment, and for studying the nature of harassing comments and the culture of trolling.",
"title": ""
},
{
"docid": "6eb8e1a391398788d9b4be294b8a70d1",
"text": "To improve software quality, researchers and practitioners have proposed static analysis tools for various purposes (e.g., detecting bugs, anomalies, and vulnerabilities). Although many such tools are powerful, they typically need complete programs where all the code names (e.g., class names, method names) are resolved. In many scenarios, researchers have to analyze partial programs in bug fixes (the revised source files can be viewed as a partial program), tutorials, and code search results. As a partial program is a subset of a complete program, many code names in partial programs are unknown. As a result, despite their syntactical correctness, existing complete-code tools cannot analyze partial programs, and existing partial-code tools are limited in both their number and analysis capability. Instead of proposing another tool for analyzing partial programs, we propose a general approach, called GRAPA, that boosts existing tools for complete programs to analyze partial programs. Our major insight is that after unknown code names are resolved, tools for complete programs can analyze partial programs with minor modifications. In particular, GRAPA locates Java archive files to resolve unknown code names, and resolves the remaining unknown code names from resolved code names. To illustrate GRAPA, we implement a tool that leverages the state-of-the-art tool, WALA, to analyze Java partial programs. We thus implemented the first tool that is able to build system dependency graphs for partial programs, complementing existing tools. We conduct an evaluation on 8,198 partial-code commits from four popular open source projects. Our results show that GRAPA fully resolved unknown code names for 98.5% bug fixes, with an accuracy of 96.1% in total. Furthermore, our results show the significance of GRAPA's internal techniques, which provides insights on how to integrate with more complete-code tools to analyze partial programs.",
"title": ""
},
{
"docid": "455a71e5358d03d5d4f3e7634db85eb2",
"text": "Part of Speech (POS) Tagging can be applied by several tools and several programming languages. This work focuses on the Natural Language Toolkit (NLTK) library in the Python environment and the gold standard corpora installable. The corpora and tagging methods are analyzed and compared by using the Python language. Different taggers are analyzed according to their tagging accuracies with data from three different corpora. In this study, we have analyzed Brown, Penn Treebank and NPS Chat corpuses. The taggers we have used for the analysis are; default tagger, regex tagger, n-gram taggers. We have applied all taggers to these three corpuses, resultantly we have shown that whereas Unigram tagger does the best tagging in all corpora, the combination of taggers does better if it is correctly ordered. Additionally, we have seen that NPS Chat Corpus gives different accuracy results than the other two corpuses.",
"title": ""
},
{
"docid": "d62469c5c49269cb7eb1dc379a674c4f",
"text": "Augmented Reality (AR) and Mobile Augmented Reality (MAR) applications have gained much research and industry attention these days. The mobile nature of MAR applications limits users’ interaction capabilities such as inputs, and haptic feedbacks. This survey reviews current research issues in the area of human computer interaction for MAR and haptic devices. The survey first presents human sensing capabilities and their applicability in AR applications. We classify haptic devices into two groups according to the triggered sense: cutaneous/tactile: touch, active surfaces, and mid-air; kinesthetic: manipulandum, grasp, and exoskeleton. Due to the mobile capabilities of MAR applications, we mainly focus our study on wearable haptic devices for each category and their AR possibilities. To conclude, we discuss the future paths that haptic feedbacks should follow for MAR applications and their challenges.",
"title": ""
},
{
"docid": "c8c82af8fc9ca5e0adac5b8b6a14031d",
"text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.",
"title": ""
},
{
"docid": "7c611108aa760808e6558b86394a5318",
"text": "Single-cell RNA sequencing (scRNA-seq) is a fast growing approach to measure the genome-wide transcriptome of many individual cells in parallel, but results in noisy data with many dropout events. Existing methods to learn molecular signatures from bulk transcriptomic data may therefore not be adapted to scRNA-seq data, in order to automatically classify individual cells into predefined classes. We propose a new method called DropLasso to learn a molecular signature from scRNA-seq data. DropLasso extends the dropout regularisation technique, popular in neural network training, to estimate sparse linear models. It is well adapted to data corrupted by dropout noise, such as scRNA-seq data, and we clarify how it relates to elastic net regularisation. We provide promising results on simulated and real scRNA-seq data, suggesting that DropLasso may be better adapted than standard regularisations to infer molecular signatures from scRNA-seq data. DropLasso is freely available as an R package at https://github.com/jpvert/droplasso",
"title": ""
},
{
"docid": "acc526dd0d86c5bf83034b3cd4c1ea38",
"text": "We describe a learning-based approach to handeye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.",
"title": ""
},
{
"docid": "da287113f7cdcb8abb709f1611c8d457",
"text": "The paper describes a completely new topology for a low-speed, high-torque permanent brushless magnet machine. Despite being naturally air-cooled, it has a significantly higher torque density than a liquid-cooled transverse-flux machine, whilst its power factor is similar to that of a conventional permanent magnet brushless machine. The high torque capability and low loss density are achieved by combining the actions of a speed reducing magnetic gear and a high speed PM brushless machine within a highly integrated magnetic circuit. In this way, the magnetic limit of the machine is reached before its thermal limit. The principle of operation of such a dasiapseudopsila direct-drive machine is described, and measured results from a prototype machine are presented.",
"title": ""
},
{
"docid": "27fa3f76bd1e097afd389582ee929837",
"text": "Prevalence of morbid obesity is rising. Along with it, the adipose associated co-morbidities increase - included panniculus morbidus, the end stage of obesity of the abdominal wall. In the course of time panniculus often develop a herniation of bowel. An incarcerated hernia and acute exacerbation of a chronic inflammation of the panniculus must be treated immediately and presents a surgical challenge. The resection of such massive abdominal panniculus presents several technical problems to the surgeon. Preparation of long standing or fixed hernias may require demanding adhesiolysis. The wound created is huge and difficult to manage, and accompanied by considerable complications at the outset. We provide a comprehensive overview of a possible approach for panniculectomy and hernia repair and overlook of the existing literature.",
"title": ""
},
{
"docid": "b24babd50bd6c7592e272f387e89953a",
"text": "Distant-supervised relation extraction inevitably suffers from wrong labeling problems because it heuristically labels relational facts with knowledge bases. Previous sentence level denoise models don’t achieve satisfying performances because they use hard labels which are determined by distant supervision and immutable during training. To this end, we introduce an entity-pair level denoise method which exploits semantic information from correctly labeled entity pairs to correct wrong labels dynamically during training. We propose a joint score function which combines the relational scores based on the entity-pair representation and the confidence of the hard label to obtain a new label, namely a soft label, for certain entity pair. During training, soft labels instead of hard labels serve as gold labels. Experiments on the benchmark dataset show that our method dramatically reduces noisy instances and outperforms the state-of-the-art systems.",
"title": ""
},
{
"docid": "45c13af41bc3d1b5ba5ea678f9b2eb6f",
"text": "A new type of mobile robots with the inch worm mechanism is presented in this paper for inspecting pipelines from the outside of pipe surfaces under hostile environments. This robot, Mark 111, is made after the successful investigation of the prototypes, Mark I and 11, which can pass over obstacles on pipelines, such as flanges and T-joints and others. Newly developed robot, Mark 111, can move vertically along the pipeline and move to the adjacent pipeline for the inspection. The sensors, infra ray proximity sensor and ultra sonic sensors and others, are installed to detect these obstacles and can move autonomously controlled by the microprocessor. The control method of this robot can be carried out by the dual control mode proposed in this paper.",
"title": ""
},
{
"docid": "2b4caf3ecdcd78ac57d8acd5788084d2",
"text": "In the age of information network explosion, Along with the popularity of the Internet, users can link to all kinds of social networking sites anytime and anywhere to interact and discuss with others. This phenomenon indicates that social networking sites have become a platform for interactions between companies and customers so far. Therefore, with the above through social science and technology development trend arising from current social phenomenon, research of this paper, mainly expectations for analysis by the information of interaction between people on the social network, such as: user clicked fans pages, user's graffiti wall message information, friend clicked fans pages etc. Three kinds of personal information for personal preference analysis, and from this huge amount of personal data to find out corresponding diverse group for personal preference category. We can by personal preference information for diversify personal advertising, product recommendation and other services. The paper at last through the actual business verification, the research can improve website browsing pages growth 11%, time on site growth 15%, site bounce rate dropped 13.8%, product click through rate growth 43%, more fully represents the results of this research fit the use's preference.",
"title": ""
},
{
"docid": "0f39f88747145f730731bc8dd108b3ac",
"text": "To cope with increasing amount of cyber threats, organizations need to share cybersecurity information beyond the borders of organizations, countries, and even languages. Assorted organizations built repositories that store and provide XML-based cybersecurity information on the Internet. Among them are NVD [1], OSVDB [2], and JVN [3], and more cybersecurity information from various organizations from various countries will be available in the Internet. However, users are unaware of all of them. To advance information sharing, users need to be aware of them and be capable of identifying and locating cybersecurity information across such repositories by the parties who need that, and then obtaining the information over networks. This paper proposes a discovery mechanism, which identifies and locates sources and types of cybersecurity information and exchanges the information over networks. The mechanism uses the ontology of cybersecurity information [4] to incorporate assorted format of such information so that it can maintain future extensibility. It generates RDF-based metadata from XML-based cybersecurity information through the use of XSLT. This paper also introduces an implementation of the proposed mechanism and discusses extensibility and usability of the proposed mechanism.",
"title": ""
},
{
"docid": "12f79714c374fd7eb90e6a26af1ecbc1",
"text": "To contribute to a better understanding of L2 sentence processing, the present study examines how second language (L2) learners parse temporary ambiguous sentences containing relative clauses. Results are reported from both off-line and on-line experiments with three groups of advanced learners of Greek, with Spanish, German or Russian as native language (L1), as well as results from corresponding experiments with a control group of adult native speakers of Greek. We found that despite their native-like mastery of the construction under investigation, the L2 learners showed different relative clause attachment preferences than the native speakers. Moreover, the L2 learners did not exhibit L1-based preferences in L2 Greek, as might be expected if they were directly influenced by attachment preferences from their native language. We suggest that L2 learners integrate information relevant for parsing differently from native speakers, with the L2 learners relying more on lexical cues than the native speakers and less on purely structurally-based parsing strategies. L1 and L2 Sentence Processing 3",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "cd9382ca95b7584695a5ca41f436209f",
"text": "This paper presents a novel method for automated extraction of road markings directly from three dimensional (3-D) point clouds acquired by a mobile light detection and ranging (LiDAR) system. First, road surface points are segmented from a raw point cloud using a curb-based approach. Then, road markings are directly extracted from road surface points through multisegment thresholding and spatial density filtering. Finally, seven specific types of road markings are further accurately delineated through a combination of Euclidean distance clustering, voxel-based normalized cut segmentation, large-size marking classification based on trajectory and curb-lines, and small-size marking classification based on deep learning, and principal component analysis (PCA). Quantitative evaluations indicate that the proposed method achieves an average completeness, correctness, and F-measure of 0.93, 0.92, and 0.93, respectively. Comparative studies also demonstrate that the proposed method achieves better performance and accuracy than those of the two existing methods.",
"title": ""
},
{
"docid": "0d1193978e4f8be0b78c6184d7ece3fe",
"text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …",
"title": ""
},
{
"docid": "f519e878b3aae2f0024978489db77425",
"text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.",
"title": ""
},
{
"docid": "7cbaa0e8549373e3106ab01d5e3b9e71",
"text": "A compact asymmetric orthomode transducer (OMT) with high isolation between the vertical and horizontal Ports is developed for the X-band synthetic aperture radar application. The basic idea of the design is to deploy the combined E- and H-plane bends within the common arm. Moreover, an offset between each polarization axis is introduced to enhance the isolation and decrease the size to be around one-third of most of the existing asymmetric OMTs. The OMT achieves better than 22.5-dB matching level and 65-dB isolation level between the two modes. Good agreement is obtained between measurements and full-wave simulations.",
"title": ""
}
] |
scidocsrr
|
c16545885143bac2aa9920e7f9110463
|
Fact Checking: Task definition and dataset construction
|
[
{
"docid": "55f68a0bb97f11b579a33881452a9d7c",
"text": "Machine learning methods for classification problems commonly assume that the class values are unordered. However, in many practical applications the class values do exhibit a natural order—for example, when learning how to grade. The standard approach to ordinal classification converts the class value into a numeric quantity and applies a regression learner to the transformed data, translating the output back into a discrete class value in a post-processing step. A disadvantage of this method is that it can only be applied in conjunction with a regression scheme. In this paper we present a simple method that enables standard classification algorithms to make use of ordering information in class attributes. By applying it in conjunction with a decision tree learner we show that it outperforms the naive approach, which treats the class values as an unordered set. Compared to special-purpose algorithms for ordinal classification our method has the advantage that it can be applied without any modification to the underlying learning scheme.",
"title": ""
},
{
"docid": "8b9901a8abf1b25d199a704445ee3333",
"text": "We use logical inference techniques for recognising textual entailment. As the performance of theorem proving turns out to be highly dependent on not readily available background knowledge, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Finally, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap; the resulting hybrid model achieves high accuracy on the RTE testset, given the state of the art. Our results also show that the different techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature.",
"title": ""
},
{
"docid": "ccbb7e753b974951bb658b63e91431bb",
"text": "In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
}
] |
[
{
"docid": "312d05085096de7d4dfaaef815f35249",
"text": "Chemomechanical caries removal allies an atraumatic technique with antimicrobiotic characteristics, minimizing painful stimuli and maximally preserving healthy dental structures. The purpose of this study was to compare the cytotoxic effects of papain-based gel (Papacarie) and another caries-removing substance, Carisolv, to a nontreatment control on cultured fibroblasts in vitro and the biocompatibility in subcutaneous tissue in vivo. The cytotoxicity analysis was performed on fibroblast cultures (NIH-3T3) after 0-, 4-, 8-, and 12-hour exposure (cell viability assay) and after 1-, 3-, 5-, and 7-day exposure (survival assay). In the in vivo study, the 2 compounds were introduced into polyethylene tubes that were implanted into subcutaneous tissues of rats. After 1, 7, 14, 30, and 60 days, tissue samples were examined histologically. Cell viability did not differ between the 2 experimental groups. The control group, however, showed significantly higher percentage viability. There were no differences in cell survival between the control and experimental groups. The histological analysis revealed a moderate inflammatory response at 2 and 7 days and a mild response at 15 days, becoming almost imperceptible by 30 and 60 days in both experimental groups. The 2 tested substances exhibited acceptable biocompatibilities and demonstrated similar responses in the in vitro cytotoxicity and in vivo implantation assay.",
"title": ""
},
{
"docid": "f9c56d14c916bff37ab69bd949c30b04",
"text": "We have examined 365 versions of Linux. For every versio n, we counted the number of instances of common (global) coupling between each of the 17 kernel modules and all the other modules in that version of Linux. We found that the num ber of instances of common coupling grows exponentially with version number. This result is significant at the 99.99% level, and no additional variables are needed to explain this increase. On the other hand, the number of lines of code in each kernel modules grows only linearly with v ersion number. We conclude that, unless Linux is restructured with a bare minimum of common c upling, the dependencies induced by common coupling will, at some future date, make Linu x exceedingly hard to maintain without inducing regression faults.",
"title": ""
},
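The Linux study above contrasts exponential growth of common-coupling counts with linear growth of code size across versions. As an assumed illustration of how such a claim can be checked, the sketch below fits a linear and an exponential model to synthetic per-version counts with NumPy; none of the numbers come from the paper.

```python
# Illustrative comparison of linear vs. exponential growth fits (synthetic data).
import numpy as np

versions = np.arange(1, 21)                          # stand-in version index
coupling = 120 * np.exp(0.12 * versions)             # synthetic "common coupling" counts
coupling = coupling * np.random.default_rng(0).normal(1.0, 0.03, versions.size)

# Linear model: y = a*x + b, fitted by least squares.
a_lin, b_lin = np.polyfit(versions, coupling, 1)
resid_lin = np.sum((coupling - (a_lin * versions + b_lin)) ** 2)

# Exponential model: log(y) = c*x + d, i.e. y = exp(d) * exp(c*x).
c_exp, d_exp = np.polyfit(versions, np.log(coupling), 1)
resid_exp = np.sum((coupling - np.exp(d_exp + c_exp * versions)) ** 2)

print(f"linear fit residual:      {resid_lin:,.0f}")
print(f"exponential fit residual: {resid_exp:,.0f}")  # much smaller for this data
```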
{
"docid": "3d72ed32a523f4c51b9c57b0d7d0f9ab",
"text": "A theoretical study on the design of broadbeam leaky-wave antennas (LWAs) of uniform type and rectilinear geometry is presented. A new broadbeam LWA structure based on the hybrid printed-circuit waveguide is proposed, which allows for the necessary flexible and independent control of the leaky-wave phase and leakage constants. The study shows that both the real and virtual focus LWAs can be synthesized in a simple manner by tapering the printed-slot along the LWA properly, but the real focus LWA is preferred in practice. Practical issues concerning the tapering of these LWA are investigated, including the tuning of the radiation pattern asymmetry level and beamwidth, the control of the ripple level inside the broad radiated main beam, and the frequency response of the broadbeam LWA. The paper provides new insight and guidance for the design of this type of LWAs.",
"title": ""
},
{
"docid": "49a66c642e8804122e0200429de21c45",
"text": "As a type of Ehlers-Danlos syndrome (EDS), vascular EDs (vEDS) is typified by a number of characteristic facial features (eg, large eyes, small chin, sunken cheeks, thin nose and lips, lobeless ears). However, vEDs does not typically display hypermobility of the large joints and skin hyperextensibility, which are features typical of the more common forms of EDS. Thus, colonic perforation or aneurysm rupture may be the first presentation of the disease. Because both complications are associated with a reduced life expectancy for individuals with this condition, an awareness of the clinical features of vEDS is important. Here, we describe the treatment of vEDS lacking the characteristic facial attributes in a 24-year-old healthy man who presented to the emergency room with abdominal pain. Enhanced computed tomography revealed diverticula and perforation in the sigmoid colon. The lesion of the sigmoid colon perforation was removed, and Hartmann procedure was performed. During the surgery, the control of bleeding was required because of vascular fragility. Subsequent molecular and genetic analysis was performed based on the suspected diagnosis of vEDS. These analyses revealed reduced type III collagen synthesis in cultured skin fibroblasts and identified a previously undocumented mutation in the gene for a1 type III collagen, confirming the diagnosis of vEDS. After eliciting a detailed medical profile, we learned his mother had a history of extensive bruising since childhood and idiopathic hematothorax. Both were prescribed oral celiprolol. One year after admission, the patient was free of recurrent perforation. This case illustrates an awareness of the clinical characteristics of vEDS and the family history is important because of the high mortality from this condition even in young people. Importantly, genetic assays could help in determining the surgical procedure and offer benefits to relatives since this condition is inherited in an autosomal dominant manner.",
"title": ""
},
{
"docid": "cb2309b5290572cf7211f69cac7b99e8",
"text": "Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking",
"title": ""
},
{
"docid": "ed46f9225b60c5f128257310cd1b27ed",
"text": "We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.",
"title": ""
},
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
},
{
"docid": "87b1018329da8bbad92165087b761c14",
"text": "Code-carrier smoothing is a commonly used method in Differential GPS (DGPS) systems to mitigate the effects of receiver noise and multipath. The FAA’s Local Area Augmentation System (LAAS) uses this technique to help provide the navigation performance needed for aircraft precision approach and landing. However, unless the reference and user smoothing filter implementations are carefully matched, divergence between the code and carrier ranging measurements will cause differential ranging errors. The FAA’s LAAS Ground Facility (LGF) reference station will implement a prescribed first-order Linear Time Invariant (LTI) filter. Yet flexibility must be provided to avionics manufacturers in their airborne filter implementations. While the LGF LTI filter is one possible means for airborne use, its relatively slow transient response (acceptable for a ground based receiver) is not ideal at the aircraft because of frequent filter resets following losses of low elevation satellite signals (caused by aircraft attitude motion). However, in the presence of a code-carrier divergence (CCD) anomaly at the GPS satellite, large divergence rates are theoretically possible, and therefore protection must be provided by the LGF through direct monitoring for such events. In response, this paper addresses the impact of the CCD threat to LAAS differential ranging error and defines an LGF monitor to ensure navigation integrity. Differential ranging errors resulting from unmatched filter designs and different ground/air filter start times are analyzed in detail, and the requirements for the LGF CCD monitor are derived. A CCD integrity monitor algorithm is then developed to directly estimate and detect anomalous divergence rates. The monitor algorithm is implemented and successfully tested using archived field data from the LAAS Test Prototype (LTP) at the William J. Hughes FAA Technical Center. Finally, the paper provides recommendations for initial monitor implementation and future work.",
"title": ""
},
{
"docid": "f5ef7795ec28c8de19bfde30a2499350",
"text": "DevOps and continuous development are getting popular in the software industry. Adopting these modern approaches in regulatory environments, such as medical device software, is not straightforward because of the demand for regulatory compliance. While DevOps relies on continuous deployment and integration, regulated environments require strict audits and approvals before releases. Therefore, the use of modern development approaches in regulatory environments is rare, as is the research on the topic. However, as software is more and more predominant in medical devices, modern software development approaches become attractive. This paper discusses the fit of DevOps for regulated medical device software development. We examine two related standards, IEC 62304 and IEC 82304-1, for obstacles and benefits of using DevOps for medical device software development. We found these standards to set obstacles for continuous delivery and integration. Respectively, development tools can help fulfilling the requirements of traceability and documentation of these standards.",
"title": ""
},
{
"docid": "5ab666919ee1f0e345c3ba16e53e8a45",
"text": "The electron cyclotron maser (ECM) is based on a stimulated cyclotron emission process involving energetic electrons in gyrational motion. It constitutes a cornerstone of relativistic electronics, a discipline that has emerged from our understanding and utilization of relativistic effects for the generation of coherent radiation from free electrons. Over a span of four decades, the ECM has undergone a remarkably successful evolution from basic research to device implementation while continuously being enriched by new physical insights. By delivering unprecedented power levels, ECM-based devices have occupied a unique position in the millimeter and submillimeter regions of the electromagnetic spectrum, and find use in numerous applications such as fusion plasma heating, advanced radars, industrial processing, materials characterization, particle acceleration, and tracking of space objects. This article presents a comprehensive review of the fundamental principles of the ECM and their embodiment in practical devices.",
"title": ""
},
{
"docid": "ed1d91a70bd8865eb364c073b999547b",
"text": "Malware programs (e.g., viruses, worms, Trojans, etc.) are a worldwide epidemic. Studies and statistics show that the impact of malware is getting worse. Malware detectors are the primary tools in the defence against malware. Most commercial anti-malware scanners maintain a database of malware patterns and heuristic signatures for detecting malicious programs within a computer system. Malware writers use semantic-preserving code transformation (obfuscation) techniques to produce new stealth variants of their malware programs. Malware variants are hard to detect with today’s detection technologies as these tools rely mostly on syntactic properties and ignore the semantics of malicious executable programs. A robust malware detection technique is required to handle this emerging security",
"title": ""
},
{
"docid": "111743197c23aff0fac0699a30edca23",
"text": "Origami describes rules for creating folded structures from patterns on a flat sheet, but does not prescribe how patterns can be designed to fit target shapes. Here, starting from the simplest periodic origami pattern that yields one-degree-of-freedom collapsible structures-we show that scale-independent elementary geometric constructions and constrained optimization algorithms can be used to determine spatially modulated patterns that yield approximations to given surfaces of constant or varying curvature. Paper models confirm the feasibility of our calculations. We also assess the difficulty of realizing these geometric structures by quantifying the energetic barrier that separates the metastable flat and folded states. Moreover, we characterize the trade-off between the accuracy to which the pattern conforms to the target surface, and the effort associated with creating finer folds. Our approach enables the tailoring of origami patterns to drape complex surfaces independent of absolute scale, as well as the quantification of the energetic and material cost of doing so.",
"title": ""
},
{
"docid": "291a1927343797d72f50134b97f73d88",
"text": "This paper proposes a half-rate single-loop reference-less binary CDR that operates from 8.5 Gb/s to 12.1 Gb/s (36% capture range). The high capture range is made possible by adding a novel frequency detection mechanism which limits the magnitude of the phase error between the input data and the VCO clock. The proposed frequency detector produces three phases of the data, and feeds into the phase detector the data phase that minimizes the CDR phase error. This frequency detector, implemented within a 10 Gb/s CDR in Fujitsu's 65 nm CMOS, consumes 11 mW and improves the capture range by up to 6 × when it is activated.",
"title": ""
},
{
"docid": "6f2d1e0d1d4d6eac9594703f9f5454ee",
"text": "People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.",
"title": ""
},
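The anchoring study above casts adjustment away from an anchor as a trade-off between time cost and error cost. The toy model below is an assumed illustration, not the authors' formal model: each adjustment step moves the estimate a fixed amount toward the target, the error cost grows with the squared final miss, and the optimal number of steps falls as the per-step time cost rises.

```python
# Toy anchoring-and-adjustment model: each step moves the estimate a fixed distance
# toward the target but costs time; error cost grows with the squared final miss.
def optimal_steps(anchor, target, step_size, time_cost, error_cost, max_steps=100):
    best_n, best_total = 0, float("inf")
    for n in range(max_steps + 1):
        miss = max(abs(target - anchor) - n * step_size, 0.0)
        total = error_cost * miss ** 2 + time_cost * n
        if total < best_total:
            best_n, best_total = n, total
    return best_n

for time_cost in (10, 50, 200):
    n = optimal_steps(anchor=10, target=50, step_size=5,
                      time_cost=time_cost, error_cost=1.0)
    print(f"per-step time cost {time_cost:>3}: {n} adjustment steps")
# Higher time cost -> fewer adjustments -> estimates stay closer to the anchor.
```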
{
"docid": "e9499206f1952f1bcbd2d7bedad1b3f8",
"text": "The Internet of Things (IoT) enables a wide range of application scenarios with potentially critical actuating and sensing tasks, e.g., in the e-health domain. For communication at the application layer, resource-constrained devices are expected to employ the constrained application protocol (CoAP) that is currently being standardized at the Internet Engineering Task Force. To protect the transmission of sensitive information, secure CoAP mandates the use of datagram transport layer security (DTLS) as the underlying security protocol for authenticated and confidential communication. DTLS, however, was originally designed for comparably powerful devices that are interconnected via reliable, high-bandwidth links. In this paper, we present Lithe-an integration of DTLS and CoAP for the IoT. With Lithe, we additionally propose a novel DTLS header compression scheme that aims to significantly reduce the energy consumption by leveraging the 6LoWPAN standard. Most importantly, our proposed DTLS header compression scheme does not compromise the end-to-end security properties provided by DTLS. Simultaneously, it considerably reduces the number of transmitted bytes while maintaining DTLS standard compliance. We evaluate our approach based on a DTLS implementation for the Contiki operating system. Our evaluation results show significant gains in terms of packet size, energy consumption, processing time, and network-wide response times when compressed DTLS is enabled.",
"title": ""
},
{
"docid": "29dcdc7c19515caad04c6fb58e7de4ea",
"text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.",
"title": ""
},
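The terrain entry above builds on value noise, in which random values on a coarse lattice are smoothly interpolated and summed over octaves. The sketch below is a minimal Python rendition under those assumptions; it is not the paper's C++ implementation and uses no real USGS elevation statistics.

```python
# Minimal 2-D value noise: random values on a coarse grid, smoothly interpolated,
# summed over octaves. Illustrative only; real geotypical terrain would calibrate
# the amplitudes against regional elevation statistics (e.g., from USGS data).
import numpy as np

def smoothstep(t):
    return t * t * (3.0 - 2.0 * t)

def value_noise(width, height, cell, rng):
    gx, gy = width // cell + 2, height // cell + 2
    lattice = rng.random((gy, gx))                 # random values on the coarse grid
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = xs / cell, ys / cell
    x0, y0 = cx.astype(int), cy.astype(int)
    tx, ty = smoothstep(cx - x0), smoothstep(cy - y0)
    v00 = lattice[y0, x0]
    v10 = lattice[y0, x0 + 1]
    v01 = lattice[y0 + 1, x0]
    v11 = lattice[y0 + 1, x0 + 1]
    top = v00 * (1 - tx) + v10 * tx               # bilinear blend with smoothed weights
    bot = v01 * (1 - tx) + v11 * tx
    return top * (1 - ty) + bot * ty

def terrain(width=256, height=256, octaves=4, seed=0):
    rng = np.random.default_rng(seed)
    out = np.zeros((height, width))
    for o in range(octaves):                       # finer cells, smaller amplitude
        out += value_noise(width, height, cell=32 >> o, rng=rng) * (0.5 ** o)
    return out                                     # heightmap; rescale to metres as needed

print(terrain().shape)  # (256, 256)
```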
{
"docid": "97b9d8dd21dfbb68cf72ad2f03b1a98a",
"text": "The explosive increase and ubiquitous accessibility of visual data on the Web have led to the prosperity of research activity in image search or retrieval. With the ignorance of visual content as a ranking clue, methods with text search techniques for visual retrieval may suffer inconsistency between the text words and visual content. Content-based image retrieval (CBIR), which makes use of the representation of visual content to identify relevant images, has attracted sustained attention in recent two decades. Such a problem is challenging due to the intention gap and the semantic gap problems. Numerous techniques have been developed for content-based image retrieval in the last decade. The purpose of this paper is to categorize and evaluate those algorithms proposed during the period of 2003 to 2016. We conclude with several promising directions for future research.",
"title": ""
},
{
"docid": "9da3fc0b3f0c41ad46412caa325e950b",
"text": "Institutional theory has proven to be a central analytical perspective for investigating the role of larger social and historical structures of Information System (IS) adaptation. However, it does not explicitly account for how organizational actors make sense of and enact IS in their local context. We address this limitation by showing how sensemaking theory can be combined with institutional theory to understand IS adaptation in organizations. Based on a literature review, we present the main assumptions behind institutional and sensemaking theory when used as analytical lenses for investigating the phenomenon of IS adaptation. Furthermore, we explore a combination of the two theories with a case study in a health care setting where an Electronic Patient Record (EPR) system was introduced and used by a group of doctors. The empirical case provides evidence of how existing institutional structures influenced the doctors’ sensemaking of the EPR system. Additionally, it illustrates how the doctors made sense of the EPR system in practice. The paper outlines that: 1) institutional theory has its explanatory power at the organizational field and organizational/group level of analysis focusing on the role that larger institutional structures play in organizational actors’ sensemaking of IS adaptation, 2) sensemaking theory has its explanatory power at the organizational/group and individual/socio-cognitive level focusing on organizational actors’ cognition and situated actions of IS adaptation, and 3) a combined view of the two theories helps us oscillate between levels of analysis, which facilitates a much richer interpretation of IS adaptation.",
"title": ""
},
{
"docid": "8c086dec1e59a2f0b81d6ce74e92eae7",
"text": "A necessary attribute of a mobile robot planning algorithm is the ability to accurately predict the consequences of robot actions to make informed decisions about where and how to drive. It is also important that such methods are efficient, as onboard computational resources are typically limited and fast planning rates are often required. In this article, we present several practical mobile robot motion planning algorithms for local and global search, developed with a common underlying trajectory generation framework for use in model-predictive control. These techniques all center on the idea of generating informed, feasible graphs at scales and resolutions that respect computational and temporal constraints of the application. Connectivity in these graphs is provided by a trajectory generator that searches in a parameterized space of robot inputs subject to an arbitrary predictive motion model. Local search graphs connect the currently observed state-to-states at or near the planning or perception horizon. Global search graphs repeatedly expand a precomputed trajectory library in a uniformly distributed state lattice to form a recombinant search space that respects differential constraints. In this article, we discuss the trajectory generation algorithm, methods for online or offline calibration of predictive motion models, sampling strategies for local search graphs that exploit global guidance and environmental information for real-time obstacle avoidance and navigation, and methods for efficient design of global search graphs with attention to optimality, feasibility, and computational complexity of heuristic search. The model-invariant nature of our approach to local and global motions planning has enabled a rapid and successful application of these techniques to a variety of platforms. Throughout the article, we also review experiments performed on planetary rovers, field robots, mobile manipulators, and autonomous automobiles and discuss future directions of the article.",
"title": ""
},
{
"docid": "0431544c8184b0b7bacca20a33df2fe9",
"text": "Automatic drum transcription is the process of generating symbolic notation for percussion instruments within audio recordings. To date, recurrent neural network (RNN) systems have achieved the highest evaluation accuracies for both drum solo and polyphonic recordings, however the accuracies within a polyphonic context still remain relatively low. To improve accuracy for polyphonic recordings, we present two approaches to the ADT problem: First, to capture the dynamism of features in multiple time-step hidden layers, we propose the use of soft attention mechanisms (SA) and an alternative RNN configuration containing additional peripheral connections (PC). Second, to capture these same trends at the input level, we propose the use of a convolutional neural network (CNN), which uses a larger set of time-step features. In addition, we propose the use of a bidirectional recurrent neural network (BRNN) in the peak-picking stage. The proposed systems are evaluated along with two state-of-the-art ADT systems in five evaluation scenarios, including a newly-proposed evaluation methodology designed to assess the generalisability of ADT systems. The results indicate that all of the newly proposed systems achieve higher accuracies than the stateof-the-art RNN systems for polyphonic recordings and that the additional BRNN peak-picking stage offers slight improvement in certain contexts.",
"title": ""
}
] |
scidocsrr
|
b38823867dfccc34ea52b6018507bd2f
|
Anatomy of the Third-Party Web Tracking Ecosystem
|
[
{
"docid": "103d6713dd613bfe5a768c60d349bb4a",
"text": "Mobile phones and tablets can be considered as the first incarnation of the post-PC era. Their explosive adoption rate has been driven by a number of factors, with the most signifcant influence being applications (apps) and app markets. Individuals and organizations are able to develop and publish apps, and the most popular form of monetization is mobile advertising.\n The mobile advertisement (ad) ecosystem has been the target of prior research, but these works typically focused on a small set of apps or are from a user privacy perspective. In this work we make use of a unique, anonymized data set corresponding to one day of traffic for a major European mobile carrier with more than 3 million subscribers. We further take a principled approach to characterize mobile ad traffic along a number of dimensions, such as overall traffic, frequency, as well as possible implications in terms of energy on a mobile device.\n Our analysis demonstrates a number of inefficiencies in today's ad delivery. We discuss the benefits of well-known techniques, such as pre-fetching and caching, to limit the energy and network signalling overhead caused by current systems. A prototype implementation on Android devices demonstrates an improvement of 50 % in terms of energy consumption for offline ad-sponsored apps while limiting the amount of ad related traffic.",
"title": ""
}
] |
[
{
"docid": "68d6d818596518114dc829bb9ecc570f",
"text": "Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learningfocused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.",
"title": ""
},
{
"docid": "74e44b88e3bb92b1319a0a08afcc2ae7",
"text": "Discriminative learning of the parameters in the naive Bayes model is known to be equivalent to a logistic regression problem. Here we show that the same fact holds for much more general Bayesian network models, as long as the corresponding network structure satisfies a certain graph-theoretic property. The property holds for naive Bayes but also for more complex structures such as tree-augmented naive Bayes (TAN) as well as for mixed diagnostic-discriminative structures. Our results imply that for networks satisfying our property, the conditional likelihood cannot have local maxima so that the global maximum can be found by simple local optimization methods. We also show that if this property does not hold, then in general the conditional likelihood can have local, non-global maxima. We illustrate our theoretical results by empirical experiments with local optimization in a conditional naive Bayes model. Furthermore, we provide a heuristic strategy for pruning the number of parameters and relevant features in such models. For many data sets, we obtain good results with heavily pruned submodels containing many fewer parameters than the original naive Bayes model.",
"title": ""
},
{
"docid": "faa8bb95a4b05bed78dbdfaec1cd147c",
"text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.",
"title": ""
},
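The SimBow entry above relies on the soft-cosine measure, which inserts a word-relation matrix M into the usual cosine between bag-of-words vectors: sim(a, b) = (a^T M b) / sqrt((a^T M a)(b^T M b)). The snippet below illustrates that formula on a toy vocabulary with hand-picked relation weights; it is not the SemEval system itself.

```python
# Soft-cosine between bag-of-words vectors: cosine with a word-relation matrix M.
# Vocabulary and relation weights here are toy assumptions for illustration.
import numpy as np

vocab = ["car", "automobile", "road", "banana"]
# M[i, j] ~ relatedness of word i and word j (1.0 on the diagonal).
M = np.array([
    [1.0, 0.9, 0.3, 0.0],
    [0.9, 1.0, 0.3, 0.0],
    [0.3, 0.3, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def bow(tokens):
    v = np.zeros(len(vocab))
    for t in tokens:
        v[vocab.index(t)] += 1
    return v

def soft_cosine(a, b, M):
    num = a @ M @ b
    den = np.sqrt(a @ M @ a) * np.sqrt(b @ M @ b)
    return num / den

q1 = bow(["car", "road"])
q2 = bow(["automobile", "road"])
print(f"plain cosine: {(q1 @ q2) / (np.linalg.norm(q1) * np.linalg.norm(q2)):.2f}")
print(f"soft cosine:  {soft_cosine(q1, q2, M):.2f}")  # higher, thanks to car~automobile
```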
{
"docid": "9af5406d0148eea660ae4d838c0beb38",
"text": "We give a detailed description of the embedding phase of the Hopcroft and Tarjan planarity testing algorithm. The embedding phase runs in linear time. An implementation based on this paper can be found in [MMN].",
"title": ""
},
{
"docid": "2c2bdd7dad5f939e5fa27d925b741efd",
"text": "We describe a new approach that improves the training of generative adversarial nets (GANs) for synthesizing diverse images from a text input. Our approach is based on the conditional version of GANs and expands on previous work leveraging an auxiliary task in the discriminator. Our generated images are not limited to certain classes and do not suffer from mode collapse while semantically matching the text input. A key to our training methods is how to form positive and negative training examples with respect to the class label of a given image. Instead of selecting random training examples, we perform negative sampling based on the semantic distance from a positive example in the class. We evaluate our approach using the Oxford-102 flower dataset, adopting the inception score and multi-scale structural similarity index (MS-SSIM) metrics to assess discriminability and diversity of the generated images. The empirical results indicate greater diversity in the generated images, especially when we gradually select more negative training examples closer to a positive example in the semantic space.",
"title": ""
},
{
"docid": "3c778c71f621b2c887dc81e7a919058e",
"text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.",
"title": ""
},
{
"docid": "c0f11031f78044075e6e798f8f10e43f",
"text": "We investigate the problem of personalized reviewbased rating prediction which aims at predicting users’ ratings for items that they have not evaluated by using their historical reviews and ratings. Most of existing methods solve this problem by integrating topic model and latent factor model to learn interpretable user and items factors. However, these methods cannot utilize word local context information of reviews. Moreover, it simply restricts user and item representations equivalent to their review representations, which may bring some irrelevant information in review text and harm the accuracy of rating prediction. In this paper, we propose a novel Collaborative Multi-Level Embedding (CMLE) model to address these limitations. The main technical contribution of CMLE is to integrate word embedding model with standard matrix factorization model through a projection level. This allows CMLE to inherit the ability of capturing word local context information from word embedding model and relax the strict equivalence requirement by projecting review embedding to user and item embeddings. A joint optimization problem is formulated and solved through an efficient stochastic gradient ascent algorithm. Empirical evaluations on real datasets show CMLE outperforms several competitive methods and can solve the two limitations well.",
"title": ""
},
{
"docid": "f61ea212d71eebf43fd677016ce9770a",
"text": "Learning to drive faithfully in highly stochastic urban settings remains an open problem. To that end, we propose a Multi-task Learning from Demonstration (MTLfD) framework which uses supervised auxiliary task prediction to guide the main task of predicting the driving commands. Our framework involves an end-to-end trainable network for imitating the expert demonstrator’s driving commands. The network intermediately predicts visual affordances and action primitives through direct supervision which provide the aforementioned auxiliary supervised guidance. We demonstrate that such joint learning and supervised guidance facilitates hierarchical task decomposition, assisting the agent to learn faster, achieve better driving performance and increases transparency of the otherwise black-box end-to-end network. We run our experiments to validate the MT-LfD framework in CARLA, an open-source urban driving simulator. We introduce multiple non-player agents in CARLA and induce temporal noise in them for realistic stochasticity.",
"title": ""
},
{
"docid": "fff85feeef18f7fa99819711e47e2d39",
"text": "This paper presents a robotic vehicle that can be operated by the voice commands given from the user. Here, we use the speech recognition system for giving &processing voice commands. The speech recognition system use an I.C called HM2007, which can store and recognize up to 20 voice commands. The R.F transmitter and receiver are used here, for the wireless transmission purpose. The micro controller used is AT89S52, to give the instructions to the robot for its operation. This robotic car can be able to avoid vehicle collision , obstacle collision and it is very secure and more accurate. Physically disabled persons can use these robotic cars and they can be used in many industries and for many applications Keywords—SpeechRecognitionSystem,AT89S52 micro controller, R. F. Transmitter and Receiver.",
"title": ""
},
{
"docid": "7df95a3da7a000dd72547c99480940b4",
"text": "What is it like to have a body? The present study takes a psychometric approach to this question. We collected structured introspective reports of the rubber hand illusion, to systematically investigate the structure of bodily self-consciousness. Participants observed a rubber hand that was stroked either synchronously or asynchronously with their own hand and then made proprioceptive judgments of the location of their own hand and used Likert scales to rate their agreement or disagreement with 27 statements relating to their subjective experience of the illusion. Principal components analysis of this data revealed four major components of the experience across conditions, which we interpret as: embodiment of rubber hand, loss of own hand, movement, and affect. In the asynchronous condition, an additional fifth component, deafference, was found. Secondary analysis of the embodiment of runner hand component revealed three subcomponents in both conditions: ownership, location, and agency. The ownership and location components were independent significant predictors of proprioceptive biases induced by the illusion. These results suggest that psychometric tools may provide a rich method for studying the structure of conscious experience, and point the way towards an empirically rigorous phenomenology.",
"title": ""
},
{
"docid": "57a48d8c45b7ed6bbcde11586140f8b6",
"text": "We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.",
"title": ""
},
{
"docid": "ca9f1a955ad033e43d25533d37f50b88",
"text": "Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.",
"title": ""
},
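The representation-shift entry above tracks how word embeddings move between time slices of a corpus. One common recipe (assumed here for illustration, not taken from the paper) aligns the two embedding spaces with an orthogonal Procrustes rotation and scores each word by one minus the cosine between its aligned old vector and its new vector; the sketch below applies this to random stand-in embeddings in which only the first word has genuinely shifted.

```python
# Align two embedding spaces with orthogonal Procrustes, then score per-word shift
# as 1 - cosine(aligned old vector, new vector). Random vectors stand in for the
# real per-period embeddings.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n_words, dim = 300, 25
emb_t1 = rng.normal(size=(n_words, dim))                # period-1 embeddings (stand-ins)
rotation = np.linalg.qr(rng.normal(size=(dim, dim)))[0]  # arbitrary basis change between runs
emb_t2 = emb_t1 @ rotation                               # period-2: same meanings, new basis
emb_t2[0] += rng.normal(scale=2.0, size=dim)             # pretend word 0 changed meaning

R, _ = orthogonal_procrustes(emb_t1, emb_t2)             # align period-1 space onto period-2
aligned = emb_t1 @ R

def cos(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

shift = 1.0 - np.array([cos(u, v) for u, v in zip(aligned, emb_t2)])
print("shift of the perturbed word:", round(float(shift[0]), 3))     # clearly the largest
print("median shift of stable words:", round(float(np.median(shift[1:])), 3))
```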
{
"docid": "9dab240226eee04ae78dc3e2b98cd00d",
"text": "The use of whole plants for the synthesis of recombinant proteins has received a great deal of attention recently because of advantages in economy, scalability and safety compared with traditional microbial and mammalian production systems. However, production systems that use whole plants lack several of the intrinsic benefits of cultured cells, including the precise control over growth conditions, batch-to-batch product consistency, a high level of containment and the ability to produce recombinant proteins in compliance with good manufacturing practice. Plant cell cultures combine the merits of whole-plant systems with those of microbial and animal cell cultures, and already have an established track record for the production of valuable therapeutic secondary metabolites. Although no recombinant proteins have yet been produced commercially using plant cell cultures, there have been many proof-of-principle studies and several companies are investigating the commercial feasibility of such production systems.",
"title": ""
},
{
"docid": "9b1643284b783f2947be11f16ae8d942",
"text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.",
"title": ""
},
{
"docid": "5e0921d158f0fa7b299fffba52f724d5",
"text": "Space syntax derives from a set of analytic measures of configuration that have been shown to correlate well with how people move through and use buildings and urban environments. Space syntax represents the open space of an environment in terms of the intervisibility of points in space. The measures are thus purely configurational, and take no account of attractors, nor do they make any assumptions about origins and destinations or path planning. Space syntax has found that, despite many proposed higher-level cognitive models, there appears to be a fundamental process that informs human and social usage of an environment. In this paper we describe an exosomatic visual architecture, based on space syntax visibility graphs, giving many agents simultaneous access to the same pre-processed information about the configuration of a space layout. Results of experiments in a simulated retail environment show that a surprisingly simple ‘random next step’ based rule outperforms a more complex ‘destination based’ rule in reproducing observed human movement behaviour. We conclude that the effects of spatial configuration on movement patterns that space syntax studies have found are consistent with a model of individual decision behaviour based on the spatial affordances offered by the morphology of the local visual field.",
"title": ""
},
{
"docid": "c1f907a8dc5308e07df76c69fd0deb45",
"text": "Emotion regulation has been conceptualized as a process by which individuals modify their emotional experiences, expressions, and physiology and the situations eliciting such emotions in order to produce appropriate responses to the ever-changing demands posed by the environment. Thus, context plays a central role in emotion regulation. This is particularly relevant to the work on emotion regulation in psychopathology, because psychological disorders are characterized by rigid responses to the environment. However, this recognition of the importance of context has appeared primarily in the theoretical realm, with the empirical work lagging behind. In this review, the author proposes an approach to systematically evaluate the contextual factors shaping emotion regulation. Such an approach consists of specifying the components that characterize emotion regulation and then systematically evaluating deviations within each of these components and their underlying dimensions. Initial guidelines for how to combine such dimensions and components in order to capture substantial and meaningful contextual influences are presented. This approach is offered to inspire theoretical and empirical work that it is hoped will result in the development of a more nuanced and sophisticated understanding of the relationship between context and emotion regulation.",
"title": ""
},
{
"docid": "7bdc8d864e370f96475dc7d5078b053c",
"text": "Nowadays, there is a trend to design complex, yet secure systems. In this context, the Trusted Execution Environment (TEE) was designed to enrich the previously defined trusted platforms. TEE is commonly known as an isolated processing environment in which applications can be securely executed irrespective of the rest of the system. However, TEE still lacks a precise definition as well as representative building blocks that systematize its design. Existing definitions of TEE are largely inconsistent and unspecific, which leads to confusion in the use of the term and its differentiation from related concepts, such as secure execution environment (SEE). In this paper, we propose a precise definition of TEE and analyze its core properties. Furthermore, we discuss important concepts related to TEE, such as trust and formal verification. We give a short survey on the existing academic and industrial ARM TrustZone-based TEE, and compare them using our proposed definition. Finally, we discuss some known attacks on deployed TEE as well as its wide use to guarantee security in diverse applications.",
"title": ""
},
{
"docid": "425eea5a508dcdd63e0e1ea8e6527a3d",
"text": "This technical report describes the multi-label classification (MLC) search space in the MEKA software, including the traditional/meta MLC algorithms, and the traditional/meta/preprocessing single-label classification (SLC) algorithms. The SLC search space is also studied because is part of MLC search space as several methods use problem transformation methods to create a solution (i.e., a classifier) for a MLC problem. This was done in order to understand better the MLC algorithms. Finally, we propose a grammar that formally expresses this understatement.",
"title": ""
},
{
"docid": "d9b7636d566d82f9714272f1c9f83f2f",
"text": "OBJECTIVE\nFew studies have investigated the association between religion and suicide either in terms of Durkheim's social integration hypothesis or the hypothesis of the regulative benefits of religion. The relationship between religion and suicide attempts has received even less attention.\n\n\nMETHOD\nDepressed inpatients (N=371) who reported belonging to one specific religion or described themselves as having no religious affiliation were compared in terms of their demographic and clinical characteristics.\n\n\nRESULTS\nReligiously unaffiliated subjects had significantly more lifetime suicide attempts and more first-degree relatives who committed suicide than subjects who endorsed a religious affiliation. Unaffiliated subjects were younger, less often married, less often had children, and had less contact with family members. Furthermore, subjects with no religious affiliation perceived fewer reasons for living, particularly fewer moral objections to suicide. In terms of clinical characteristics, religiously unaffiliated subjects had more lifetime impulsivity, aggression, and past substance use disorder. No differences in the level of subjective and objective depression, hopelessness, or stressful life events were found.\n\n\nCONCLUSIONS\nReligious affiliation is associated with less suicidal behavior in depressed inpatients. After other factors were controlled, it was found that greater moral objections to suicide and lower aggression level in religiously affiliated subjects may function as protective factors against suicide attempts. Further study about the influence of religious affiliation on aggressive behavior and how moral objections can reduce the probability of acting on suicidal thoughts may offer new therapeutic strategies in suicide prevention.",
"title": ""
}
] |
scidocsrr
|