query_id (string, 32 chars) | query (string, 6-5.38k chars) | positive_passages (list, 1-22 items) | negative_passages (list, 9-100 items) | subset (string, 7 classes)
---|---|---|---|---|
123dba698635c6a7aeb0ff2cb37a1c42
|
Multistage Object Detection With Group Recursive Learning
|
[
{
"docid": "daa7773486701deab7b0c69e1205a1d9",
"text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern cross these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-arts in term of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.",
"title": ""
}
] |
[
{
"docid": "5e5abd66204a258d8b2be9e36c5ecf83",
"text": "The cultural diversity of culinary practice, as illustrated by the variety of regional cuisines, raises the question of whether there are any general patterns that determine the ingredient combinations used in food today or principles that transcend individual tastes and recipes. We introduce a flavor network that captures the flavor compounds shared by culinary ingredients. Western cuisines show a tendency to use ingredient pairs that share many flavor compounds, supporting the so-called food pairing hypothesis. By contrast, East Asian cuisines tend to avoid compound sharing ingredients. Given the increasing availability of information on food preparation, our data-driven investigation opens new avenues towards a systematic understanding of culinary practice.",
"title": ""
},
{
"docid": "2267c2b85d94bc850a0f5e289db7c079",
"text": "An over-current protection circuit for improving the integrated regulators' latch-up effect is proposed. Latch-up effect can be eliminated through combining foldback function with constant current limiting function which could change the slope of the foldback current curve to avoid intersecting the static load line. With normal 0.6 mum CMOS process, the feasibility and reliability of the circuit was proved",
"title": ""
},
{
"docid": "64bd2fc0d1b41574046340833144dabe",
"text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "db4784e051b798dfa6c3efa5e84c4d00",
"text": "Purpose – The purpose of this paper is to propose and verify that the technology acceptance model (TAM) can be employed to explain and predict the acceptance of mobile learning (M-learning); an activity in which users access learning material with their mobile devices. The study identifies two factors that account for individual differences, i.e. perceived enjoyment (PE) and perceived mobility value (PMV), to enhance the explanatory power of the model. Design/methodology/approach – An online survey was conducted to collect data. A total of 313 undergraduate and graduate students in two Taiwan universities answered the questionnaire. Most of the constructs in the model were measured using existing scales, while some measurement items were created specifically for this research. Structural equation modeling was employed to examine the fit of the data with the model by using the LISREL software. Findings – The results of the data analysis shows that the data fit the extended TAM model well. Consumers hold positive attitudes for M-learning, viewing M-learning as an efficient tool. Specifically, the results show that individual differences have a great impact on user acceptance and that the perceived enjoyment and perceived mobility can predict user intentions of using M-learning. Originality/value – There is scant research available in the literature on user acceptance of M-learning from a customer’s perspective. The present research shows that TAM can predict user acceptance of this new technology. Perceived enjoyment and perceived mobility value are antecedents of user acceptance. The model enhances our understanding of consumer motivation of using M-learning. This understanding can aid our efforts when promoting M-learning.",
"title": ""
},
{
"docid": "ffaa8edb1fccf68e6b7c066fb994510a",
"text": "A fast and precise determination of the DOA (direction of arrival) for immediate object classification becomes increasingly important for future automotive radar generations. Hereby, the elevation angle of an object is considered as a key parameter especially in complex urban environments. An antenna concept allowing the determination of object angles in azimuth and elevation is proposed and discussed in this contribution. This antenna concept consisting of a linear patch array and a cylindrical dielectric lens is implemented into a radar sensor and characterized in terms of angular accuracy and ambiguities using correlation algorithms and the CRLB (Cramer Rao Lower Bound).",
"title": ""
},
{
"docid": "bb128a330bfb654dab0c06269b91d68a",
"text": "Most Chinese texts are inputted with keyboard via two input methods: Pinyin and Wubi, especially by Pinyin input method. In this paper, this users' habitation is used to find the spelling errors automatically. We first train a Chinese character form n-gram language model on a large scale Chinese corpus in the traditional way. In order to improve this character based model, we transform the whole corpus into Pinyin to obtain Pinyin based language model. Fatherly, the tone is considered to get the third model. Integrating these three models, we improve the performance of checking spelling error system. Experimental results demonstrate the effeteness of our model.",
"title": ""
},
{
"docid": "b3db9ba5bd1a6c467f2cf526072641f3",
"text": "This paper describes the design and analysis of a Log-Periodic Microstrip Antenna Array operating between 3.3 Gigahertz (GHz) and 4.5 GHz. A five square patches fed by inset feed line technique are connected with a single transmission line by a log-periodic array formation. By applying five PIN Diodes at the transmission line with a quarter-wave length radial stub biasing, four different sub-band frequencies are configured by switching ON and OFF the PIN Diode. Simulation as well as measurement results with antenna design is presented and it shows that a good agreement in term of return loss. The simulated radiation pattern and realized gain for every sub bands also presented and discussed.",
"title": ""
},
{
"docid": "28b0882f5172aeba01bce14bb1a78782",
"text": "Until recently, industrial control systems (ICSs) used “air-gap” security measures, where every node of the ICS network was isolated from other networks, including the Internet, by a physical disconnect. Attaching ICS networks to the Internet benefits companies and engineers who use them. However, as these systems were designed for use in the air-gapped security environment, protocols used by ICSs contain little to no security features and are vulnerable to various attacks. This paper proposes an approach to detect the intrusions into network attached ICSs by measuring and verifying data that is transmitted through the network but is not inherently the data used by the transmission protocol-network telemetry. Using simulated PLC units, the developed IDS was able to achieve 94.3 percent accuracy when differentiating between machines of an attacker and engineer on the same network, and 99.5 percent accuracy when differentiating between attacker and engineer on the Internet.",
"title": ""
},
{
"docid": "666137f1b598a25269357d6926c0b421",
"text": "representation techniques. T he World Wide Web is possible because a set of widely established standards guarantees interoperability at various levels. Until now, the Web has been designed for direct human processing, but the next-generation Web, which Tim Berners-Lee and others call the “Semantic Web,” aims at machine-processible information.1 The Semantic Web will enable intelligent services—such as information brokers, search agents, and information filters—which offer greater functionality and interoperability than current stand-alone services. The Semantic Web will only be possible once further levels of interoperability have been established. Standards must be defined not only for the syntactic form of documents, but also for their semantic content. Notable among recent W3C standardization efforts are XML/XML schema and RDF/RDF schema, which facilitate semantic interoperability. In this article, we explain the role of ontologies in the architecture of the Semantic Web. We then briefly summarize key elements of XML and RDF, showing why using XML as a tool for semantic interoperability will be ineffective in the long run. We argue that a further representation and inference layer is needed on top of the Web’s current layers, and to establish such a layer, we propose a general method for encoding ontology representation languages into RDF/RDF schema. We illustrate the extension method by applying it to Ontology Interchange Language (OIL), an ontology representation and inference language.2",
"title": ""
},
{
"docid": "860d39ff0ddd80caaf712e84a82f4d86",
"text": "Steganography and steganalysis received a great deal of attention from media and law enforcement. Many powerful and robust methods of steganography and steganalysis have been developed. In this paper we are considering the methods of steganalysis that are to be used for this processes. Paper giving some idea about the steganalysis and its method. Keywords— Include at least 5 keywords or phrases",
"title": ""
},
{
"docid": "687414897eabd32ebbbca6ae792d7148",
"text": "When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.",
"title": ""
},
{
"docid": "4c1e240af3543473e6f08beda06f8245",
"text": "As the worlds of commerce, entertainment, travel, and Internet technology become more inextricably linked, new types of business data become available for creative use and formal analysis. Indeed, this paper provides a study of exploiting online travel information for personalized travel package recommendation. A critical challenge along this line is to address the unique characteristics of travel data, which distinguish travel packages from traditional items for recommendation. To this end, we first analyze the characteristics of the travel packages and develop a Tourist-Area-Season Topic (TAST) model, which can extract the topics conditioned on both the tourists and the intrinsic features (i.e. locations, travel seasons) of the landscapes. Based on this TAST model, we propose a cocktail approach on personalized travel package recommendation. Finally, we evaluate the TAST model and the cocktail approach on real-world travel package data. The experimental results show that the TAST model can effectively capture the unique characteristics of the travel data and the cocktail approach is thus much more effective than traditional recommendation methods for travel package recommendation.",
"title": ""
},
{
"docid": "316c106ae8830dcf8a3cf64775f56ebe",
"text": "Friendship is the cornerstone to build a social network. In online social networks, statistics show that the leading reason for user to create a new friendship is due to recommendation. Thus the accuracy of recommendation matters. In this paper, we propose a Bayesian Personalized Ranking Deep Neural Network (BayDNN) model for friend recommendation in social networks. With BayDNN, we achieve significant improvement on two public datasets: Epinions and Slashdot. For example, on Epinions dataset, BayDNN significantly outperforms the state-of-the-art algorithms, with a 5% improvement on NDCG over the best baseline.\n The advantages of the proposed BayDNN mainly come from its underlying convolutional neural network (CNN), which offers a mechanism to extract latent deep structural feature representations of the complicated network data, and a novel Bayesian personalized ranking idea, which precisely captures the users' personal bias based on the extracted deep features. To get good parameter estimation for the neural network, we present a fine-tuned pre-training strategy for the proposed BayDNN model based on Poisson and Bernoulli probabilistic models.",
"title": ""
},
{
"docid": "9a4cf33f429bd376be787feaa2881610",
"text": "By adopting a cultural transformation in its employees' approach to work and using manufacturing based continuous quality improvement methods, the surgical pathology division of Henry Ford Hospital, Detroit, MI, focused on reducing commonly encountered defects and waste in processes throughout the testing cycle. At inception, the baseline in-process defect rate was measured at nearly 1 in 3 cases (27.9%). After the year-long efforts of 77 workers implementing more than 100 process improvements, the number of cases with defects was reduced by 55% to 1 in 8 cases (12.5%), with a statistically significant reduction in the overall distribution of defects (P = .0004). Comparison with defects encountered in the pre-improvement period showed statistically significant reductions in pre-analytic (P = .0007) and analytic (P = .0002) test phase processes in the post-improvement period that included specimen receipt, specimen accessioning, grossing, histology slides, and slide recuts. We share the key improvements implemented that were responsible for the overall success in reducing waste and re-work in the broad spectrum of surgical pathology processes.",
"title": ""
},
{
"docid": "193f28dd6c2288b82845628296ae30ff",
"text": "Ontologies are widely used in biological and biomedical research. Their success lies in their combination of four main features present in almost all ontologies: provision of standard identifiers for classes and relations that represent the phenomena within a domain; provision of a vocabulary for a domain; provision of metadata that describes the intended meaning of the classes and relations in ontologies; and the provision of machine-readable axioms and definitions that enable computational access to some aspects of the meaning of classes and relations. While each of these features enables applications that facilitate data integration, data access and analysis, a great potential lies in the possibility of combining these four features to support integrative analysis and interpretation of multimodal data. Here, we provide a functional perspective on ontologies in biology and biomedicine, focusing on what ontologies can do and describing how they can be used in support of integrative research. We also outline perspectives for using ontologies in data-driven science, in particular their application in structured data mining and machine learning applications.",
"title": ""
},
{
"docid": "2e2e8219b7870529e8ca17025190aa1b",
"text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.",
"title": ""
},
{
"docid": "4f02e48932129dd77f48f99478c08ab2",
"text": "A low-power low-voltage OTA with rail-to-rail output is introduced. The proposed topology is based on the common current mirror OTA topology and provide gain enhancement without extra power consumption. Implemented in a standard 0.25/spl mu/m CMOS technology, the proposed OTA achieves 50 dB DC gain in 0.8 V supply voltage. The GBW is 1.2MHz and the static power consumption is 8/spl mu/W while driving 18pF load. The class AB operation increases the slew rate and still maintains low static biasing current. This topology is suitable for low-power low-voltage switched-capacitor application.",
"title": ""
},
{
"docid": "2fb484ef6d394e27a3157774048c3917",
"text": "As the demand of high quality service in next generation wireless communication systems increases, a high performance of data transmission requires an increase of spectrum efficiency and an improvement of error performance in wireless communication systems. One of the promising approaches to 4G is adaptive OFDM (AOFDM). In AOFDM, adaptive transmission scheme is employed according to channel fading condition with OFDM to improve the performance Adaptive modulation system is superior to fixed modulation system since it changes modulation scheme according to channel fading condition. Performance of adaptive modulation system depends on decision making logic. Adaptive modulation systems using hardware decision making circuits are inefficient to decide or change modulation scheme according to given conditions. Using fuzzy logic in decision making interface makes the system more efficient. In this paper, we propose a OFDM system with adaptive modulation using fuzzy logic interface to improve system capacity with maintaining good error performance. The results of computer simulation show the improvement of system capacity in Rayleigh fading channel.",
"title": ""
},
{
"docid": "1512f35cd69a456a72f981577cfb068b",
"text": "Recurrence and progression to higher grade lesions are key biological events and characteristic behaviors in the evolution process of glioma. Malignant astrocytic tumors such as glioblastoma (GBM) are the most lethal intracranial tumors. However, the clinical practicability and significance of molecular parameters for the diagnostic and prognostic prediction of astrocytic tumors is still limited. In this study, we detected ATRX, IDH1-R132H and Ki-67 by immunohistochemistry and observed the association of IDH1-R132H with ATRX and Ki-67 expression. There was a strong association between ATRX loss and IDH1-R132H (p<0.0001). However, Ki-67 high expression restricted in the tumors with IDH1-R132H negative (p=0.0129). Patients with IDH1-R132H positive or ATRX loss astrocytic tumors had a longer progressive- free survival (p<0.0001, p=0.0044, respectively). High Ki-67 expression was associated with shorter PFS in patients with astrocytic tumors (p=0.002). Then we characterized three prognostic subgroups of astrocytic tumors (referred to as A1, A2 and A3). The new model demonstrated a remarkable separation of the progression interval in the three molecular subgroups and the distribution of patients' age in the A1-A2-A3 model was also significant different. This model will aid predicting the overall survival and progressive time of astrocytic tumors' patients.",
"title": ""
}
] |
scidocsrr
|
9b167a706641cdae0822a1b85a54e8a4
|
An evolutionary theory of human motivation.
|
[
{
"docid": "c72eca59514adc8b9afeda5f23f8a7ea",
"text": "Conscious feelings have traditionally been viewed as a central and necessary ingredient of emotion. Here we argue that emotion also can be genuinely unconscious. We describe evidence that positive and negative reactions can be elicited subliminally and remain inaccessible to introspection. Despite the absence of subjective feelings in such cases, subliminally induced affective reactions still influence people’s preference judgments and even the amount of beverage they consume. This evidence is consistent with evolutionary considerations suggesting that systems underlying basic affective reactions originated prior to systems for conscious awareness. The idea of unconscious emotion is also supported by evidence from affective neuroscience indicating that subcortical brain systems underlie basic ‘‘liking’’ reactions. More research is needed to clarify the relations and differences between conscious and unconscious emotion, and their underlying mechanisms. However, even under the current state of knowledge, it appears that processes underlying conscious feelings can become decoupled from processes underlying emotional reactions, resulting in genuinely unconscious emotion. KEYWORDS—affect; automaticity; consciousness; emotion; neuroscience To say that people are conscious of their own emotions sounds like a truism. After all, emotions are feelings, so how could one have feelings that are not felt? Of course, people sometimes may be mistaken about the cause of their emotion or may not know why they feel a particular emotion, as when they feel anxious for what seems no particular reason. On occasion, people may even incorrectly construe their own emotional state, as when they angrily deny that they are angry. But many psychologists presume that the emotion itself is intrinsically conscious, and that with proper motivation and attention, it can be brought into the full light of awareness. So, at least, goes the traditional view. Our view goes a bit further. We suggest that under some conditions an emotional process may remain entirely unconscious, even when the person is attentive and motivated to describe his or her feelings correctly (Berridge & Winkielman, 2003; Winkielman, Berridge, & Wilbarger, in press). Such an emotional process may nevertheless drive the person’s behavior and physiological reactions, even while remaining inaccessible to conscious awareness. In short, we propose the existence of genuinely unconscious emotions. THE TRADITIONAL VIEW: EMOTION AS A CONSCIOUS",
"title": ""
}
] |
[
{
"docid": "c0fd9b73e2af25591e3c939cdbed1c1a",
"text": "We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https://github.com/hezhangsprinter/DCPDN",
"title": ""
},
{
"docid": "2282c06ea5e203b7e94095334bba05b9",
"text": "Exploring and surveying the world has been an important goal of humankind for thousands of years. Entering the 21st century, the Earth has almost been fully digitally mapped. Widespread deployment of GIS (Geographic Information Systems) technology and a tremendous increase of both satellite and street-level mapping over the last decade enables the public to view large portions of the world using computer applications such as Bing Maps or Google Earth.",
"title": ""
},
{
"docid": "fac1eebdae6719224a6bd01785c72551",
"text": "Tolerance design has become a very sensitive and important issue in product and process development because of increasing demand for quality products and the growing requirements for automation in manufacturing. This chapter presents tolerance stack up analysis of dimensional and geometrical tolerances. The stack up of tolerances is important for functionality of the mechanical assembly as well as optimizing the cost of the system. Many industries are aware of the importance of geometrical dimensioning & Tolerancing (GDT) of their product design. Conventional methods of tolerance stack up analysis are tedious and time consuming. Stack up of geometrical tolerances is usually difficult as it involves application of numerous rules & conditions. This chapter introduces the various approaches viz. Generic Capsule, Quickie and Catena methods, used towards tolerance stack up analysis for geometrical tolerances. Automation of stack up of geometrical tolerances can be used for tolerance allocation on the components as well as their assemblies considering the functionality of the system. Stack of geometrical tolerances has been performed for individual components as well as assembly of these components.",
"title": ""
},
{
"docid": "17106095b19d87ad8883af0606714a07",
"text": "Based on American Customer Satisfaction Index model ACSI and study at home and abroad, a Hotel online booking Consumer Satisfaction model (HECS) is established. After empirically testing the validity of the measurement model and structural model of Hotel online booking Consumer Satisfaction, consumer satisfaction index is calculated. Results show that Website easy usability impacts on customer satisfaction most significantly, followed by responsiveness and reliability of the website. Statistic results also show a medium consumer satisfaction index number. Suggestions are given to improve online booking consumer satisfaction, such as website designing of easier using, timely processing of orders, offering more offline personal support for online service, doing more communication with customers, providing more communication channel and so on.",
"title": ""
},
{
"docid": "7d26c09bf274ae41f19a6aafc6a43d18",
"text": "Converging findings of animal and human studies provide compelling evidence that the amygdala is critically involved in enabling us to acquire and retain lasting memories of emotional experiences. This review focuses primarily on the findings of research investigating the role of the amygdala in modulating the consolidation of long-term memories. Considerable evidence from animal studies investigating the effects of posttraining systemic or intra-amygdala infusions of hormones and drugs, as well as selective lesions of specific amygdala nuclei, indicates that (a) the amygdala mediates the memory-modulating effects of adrenal stress hormones and several classes of neurotransmitters; (b) the effects are selectively mediated by the basolateral complex of the amygdala (BLA); (c) the influences involve interactions of several neuromodulatory systems within the BLA that converge in influencing noradrenergic and muscarinic cholinergic activation; (d) the BLA modulates memory consolidation via efferents to other brain regions, including the caudate nucleus, nucleus accumbens, and cortex; and (e) the BLA modulates the consolidation of memory of many different kinds of information. The findings of human brain imaging studies are consistent with those of animal studies in suggesting that activation of the amygdala influences the consolidation of long-term memory; the degree of activation of the amygdala by emotional arousal during encoding of emotionally arousing material (either pleasant or unpleasant) correlates highly with subsequent recall. The activation of neuromodulatory systems affecting the BLA and its projections to other brain regions involved in processing different kinds of information plays a key role in enabling emotionally significant experiences to be well remembered.",
"title": ""
},
{
"docid": "be7412a48578741d830e267bff0c1c6a",
"text": "In recent years, greater attention has been given to vessels’ seakeeping characteristics. This is due to a number of factors: proliferation of high-speed semi-displacement passenger vessels; increasing demand for passenger comfort (passengers are often able to vote with their feet by taking alternative transport, e.g. English Channel Tunnel); deployment of increasingly sophisticated systems on ever smaller naval vessels (Hunt 1999); greater pressure from regulatory bodies and the broader public for safer vessels; staggering advancements in desktop computer power; and developments in prediction and analysis tools.",
"title": ""
},
{
"docid": "6d4cd80341c429ecaaccc164b1bde5f9",
"text": "One hundred and two olive RAPD profiles were sampled from all around the Mediterranean Basin. Twenty four clusters of RAPD profiles were shown in the dendrogram based on the Ward’s minimum variance algorithm using chi-square distances. Factorial discriminant analyses showed that RAPD profiles were correlated with the use of the fruits and the country or region of origin of the cultivars. This suggests that cultivar selection has occurred in different genetic pools and in different areas. Mitochondrial DNA RFLP analyses were also performed. These mitotypes supported the conclusion also that multilocal olive selection has occurred. This prediction for the use of cultivars will help olive growers to choose new foreign cultivars for testing them before an eventual introduction if they are well adapted to local conditions.",
"title": ""
},
{
"docid": "b829049a8abf47f8f13595ca54eaa009",
"text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.",
"title": ""
},
{
"docid": "ab35dcaf3e240921225b639e8c17f2de",
"text": "Refactorings are widely recognised as ways to improve the internal structure of object-oriented software while maintaining its external behaviour. Unfortunately, refactorings concentrate on the treatment of symptoms (the so called code-smells), thus improvements depend a lot on the skills of the maintained coupling and cohesion on the other hand are quality attributes which are generally recognized as being among the most likely quantifiable indicators for software maintainability. Therefore, this paper analyzes how refactorings manipulate coupling/cohesion characteristics, and how to identify refactoring opportunities that improve these characteristics. As such we provide practical guidelines for the optimal usage of refactoring in a software maintenance process.",
"title": ""
},
{
"docid": "c795c3fbf976c5746c75eb33c622ad21",
"text": "We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems1.",
"title": ""
},
{
"docid": "04fc127c1b6e915060c2f3035aa5067b",
"text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing–emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user’s emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and pshysiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.",
"title": ""
},
{
"docid": "ff32e960fb5ff7b7e0910e6e69421860",
"text": "Abslracl Semantic mapping aims to create maps that include meaningful features, both to robots nnd humans. We prescnt :10 extens ion to our feature based mapping technique that includes information about the locations of horizontl.lJ surfaces such as tables, shelves, or counters in the map. The surfaces a rc detected in 3D point clouds, the locations of which arc optimized by our SLAM algorithm. The resulting scans of surfaces :lrc then analyzed to segment them into distinct surfaces, which may include measurements of a single surface across multiple scans. Preliminary rl'Sults arc presented in the form of a feature based map augmented with a sct of 3D point clouds in a consistent global map frame that represent all detected surfaces within the mapped area.",
"title": ""
},
{
"docid": "763372dc4ebc2cd972a5b851be014bba",
"text": "Parametric piecewise-cubic functions are used throughout the computer graphics industry to represent curved shapes. For many applications, it would be useful to be able to reliably derive this representation from a closely spaced set of points that approximate the desired curve, such as the input from a digitizing tablet or a scanner. This paper presents a solution to the problem of automatically generating efficient piecewise parametric cubic polynomial approximations to shapes from sampled data. We have developed an algorithm that takes a set of sample points, plus optional endpoint and tangent vector specifications, and iteratively derives a single parametric cubic polynomial that lies close to the data points as defined by an error metric based on least-squares. Combining this algorithm with dynamic programming techniques to determine the knot placement gives good results over a range of shapes and applications.",
"title": ""
},
{
"docid": "213862a47773c5ad34aa69b8b0a951d1",
"text": "The next generation wireless networks are expected to operate in fully automated fashion to meet the burgeoning capacity demand and to serve users with superior quality of experience. Mobile wireless networks can leverage spatio-temporal information about user and network condition to embed the system with end-to-end visibility and intelligence. Big data analytics has emerged as a promising approach to unearth meaningful insights and to build artificially intelligent models with assistance of machine learning tools. Utilizing aforementioned tools and techniques, this paper contributes in two ways. First, we utilize mobile network data (Big Data)—call detail record—to analyze anomalous behavior of mobile wireless network. For anomaly detection purposes, we use unsupervised clustering techniques namely k-means clustering and hierarchical clustering. We compare the detected anomalies with ground truth information to verify their correctness. From the comparative analysis, we observe that when the network experiences abruptly high (unusual) traffic demand at any location and time, it identifies that as anomaly. This helps in identifying regions of interest in the network for special action such as resource allocation, fault avoidance solution, etc. Second, we train a neural-network-based prediction model with anomalous and anomaly-free data to highlight the effect of anomalies in data while training/building intelligent models. In this phase, we transform our anomalous data to anomaly-free and we observe that the error in prediction, while training the model with anomaly-free data has largely decreased as compared to the case when the model was trained with anomalous data.",
"title": ""
},
{
"docid": "2e0585860c1fa533412ff1fea76632cb",
"text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.",
"title": ""
},
{
"docid": "8ce3fc72fa132b8baeff35035354d194",
"text": "Raman spectroscopy is a molecular vibrational spectroscopic technique that is capable of optically probing the biomolecular changes associated with diseased transformation. The purpose of this study was to explore near-infrared (NIR) Raman spectroscopy for identifying dysplasia from normal gastric mucosa tissue. A rapid-acquisition dispersive-type NIR Raman system was utilised for tissue Raman spectroscopic measurements at 785 nm laser excitation. A total of 76 gastric tissue samples obtained from 44 patients who underwent endoscopy investigation or gastrectomy operation were used in this study. The histopathological examinations showed that 55 tissue specimens were normal and 21 were dysplasia. Both the empirical approach and multivariate statistical techniques, including principal components analysis (PCA), and linear discriminant analysis (LDA), together with the leave-one-sample-out cross-validation method, were employed to develop effective diagnostic algorithms for classification of Raman spectra between normal and dysplastic gastric tissues. High-quality Raman spectra in the range of 800–1800 cm−1 can be acquired from gastric tissue within 5 s. There are specific spectral differences in Raman spectra between normal and dysplasia tissue, particularly in the spectral ranges of 1200–1500 cm−1 and 1600–1800 cm−1, which contained signals related to amide III and amide I of proteins, CH3CH2 twisting of proteins/nucleic acids, and the C=C stretching mode of phospholipids, respectively. The empirical diagnostic algorithm based on the ratio of the Raman peak intensity at 875 cm−1 to the peak intensity at 1450 cm−1 gave the diagnostic sensitivity of 85.7% and specificity of 80.0%, whereas the diagnostic algorithms based on PCA-LDA yielded the diagnostic sensitivity of 95.2% and specificity 90.9% for separating dysplasia from normal gastric tissue. Receiver operating characteristic (ROC) curves further confirmed that the most effective diagnostic algorithm can be derived from the PCA-LDA technique. Therefore, NIR Raman spectroscopy in conjunction with multivariate statistical technique has potential for rapid diagnosis of dysplasia in the stomach based on the optical evaluation of spectral features of biomolecules.",
"title": ""
},
{
"docid": "5048a090adfdd3ebe9d9253ca4f72644",
"text": "Movement disorders or extrapyramidal symptoms (EPS) associated with selective serotonin reuptake inhibitors (SSRIs) have been reported. Although akathisia was found to be the most common EPS, and fluoxetine was implicated in the majority of the adverse reactions, there were also cases with EPS due to sertraline treatment. We present a child and an adolescent who developed torticollis (cervical dystonia) after using sertraline. To our knowledge, the child case is the first such report of sertraline-induced torticollis, and the adolescent case is the third in the literature.",
"title": ""
},
{
"docid": "df032871f23711e2e7cedf735d8cdca0",
"text": "Received: 8 February 2006 Revised: 31 March 2006 Accepted: 13 April 2006 Online publication date: 22 June 2006 Abstract In this article, Novak's concept mapping technique is compared to three other types of visualization formats, namely mind maps, conceptual diagrams, and visual metaphors. The application parameters and the respective advantages and disadvantages of each format for learning and knowledge sharing are reviewed and discussed. It is argued that the combination of these four visualization types can play to the strength of each one. The article then provides real-life examples from such a use in undergraduate and graduate university teaching. The results provide first indications that the different visualization formats can be used in complementary ways to enhance motivation, attention, understanding and recall. The implications for a complementary use of these visualization formats in class room and meeting contexts are discussed and a future research agenda in this domain is articulated. Information Visualization (2006) 5, 202--210. doi:10.1057/palgrave.ivs.9500131",
"title": ""
},
{
"docid": "5d527ad4493860a8d96283a5c58c3979",
"text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.",
"title": ""
},
{
"docid": "abf3e75c6f714e4c2e2a02f9dd00117b",
"text": "Recent work has shown that collaborative filter-based recommender systems can be improved by incorporating side information, such as natural language reviews, as a way of regularizing the derived product representations. Motivated by the success of this approach, we introduce two different models of reviews and study their effect on collaborative filtering performance. While the previous state-of-the-art approach is based on a latent Dirichlet allocation (LDA) model of reviews, the models we explore are neural network based: a bag-of-words product-of-experts model and a recurrent neural network. We demonstrate that the increased flexibility offered by the product-of-experts model allowed it to achieve state-of-the-art performance on the Amazon review dataset, outperforming the LDA-based approach. However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.",
"title": ""
}
] |
scidocsrr
|
89e963ab42b06a199772363d69798a6b
|
Efficient and reliable low-power backscatter networks
|
[
{
"docid": "315dad78cb444487e76b202c4645fa3c",
"text": "The UMass Moo is a passively powered computational RFID that harvests RFID reader energy from the UHF band, communicates with an RFID reader, and processes data from its onboard sensors. Its function can be extended via its general-purpose I/Os, serial buses, and 12-bit ADC/DAC ports. Based on the Intel DL WISP (revision 4.1), the Moo provides an RFID-scale, reprogrammable, batteryless sensing platform. This report compares the Moo to its ancestor, documents our design decisions, and details the Moo’s compatibility with other devices. It is meant to be a companion document for the open-source release of code and specifications for the Moo (revision 1.x). We made an initial batch of Moo 1.1 hardware available to other researchers in June 2011.",
"title": ""
}
] |
[
{
"docid": "e8216c275a20be6706f5c2792bc6fd92",
"text": "Robust and reliable vehicle detection from images acquired by a moving vehicle is an important problem with numerous applications including driver assistance systems and self-guided vehicles. Our focus in this paper is on improving the performance of on-road vehicle detection by employing a set of Gabor filters specifically optimized for the task of vehicle detection. This is essentially a kind of feature selection, a critical issue when designing any pattern classification system. Specifically, we propose a systematic and general evolutionary Gabor filter optimization (EGFO) approach for optimizing the parameters of a set of Gabor filters in the context of vehicle detection. The objective is to build a set of filters that are capable of responding stronger to features present in vehicles than to nonvehicles, therefore improving class discrimination. The EGFO approach unifies filter design with filter selection by integrating genetic algorithms (GAs) with an incremental clustering approach. Filter design is performed using GAs, a global optimization approach that encodes the Gabor filter parameters in a chromosome and uses genetic operators to optimize them. Filter selection is performed by grouping filters having similar characteristics in the parameter space using an incremental clustering approach. This step eliminates redundant filters, yielding a more compact optimized set of filters. The resulting filters have been evaluated using an application-oriented fitness criterion based on support vector machines. We have tested the proposed framework on real data collected in Dearborn, MI, in summer and fall 2001, using Ford's proprietary low-light camera.",
"title": ""
},
{
"docid": "c158e9421ec0d1265bd625b629e64dc5",
"text": "This paper proposes a gateway framework for in-vehicle networks (IVNs) based on the controller area network (CAN), FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface (GUI). The gateway framework provides state-of-the-art functionalities that include parallel reprogramming, diagnostic routing, network management (NM), dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.",
"title": ""
},
{
"docid": "ca6ae788fc63563e39e1cb611dbdd8c5",
"text": "STATL is an extensible state/transition-based attack desc ription language designed to support intrusion detection. The language allows one to describe computer pen trations as sequences of actions that an attacker performs to compromise a computer system. A STATL descripti on of an attack scenario can be used by an intrusion detection system to analyze a stream of events and de tect possible ongoing intrusions. Since intrusion detection is performed in different domains (i.e., the netw ork or the hosts) and in different operating environments (e.g., Linux, Solaris, or Windows NT), it is useful to h ave an extensible language that can be easily tailored to different target environments. STATL defines do main-independent features of attack scenarios and provides constructs for extending the language to describe attacks in particular domains and environments. The STATL language has been successfully used in describing both network-based and host-based attacks, and it has been tailored to very different environments, e.g ., Sun Microsystems’ Solaris and Microsoft’s Windows NT. An implementation of the runtime support for the STATL language has been developed and a toolset of intrusion detection systems based on STATL has b een implemented. The toolset was used in a recent intrusion detection evaluation effort, delivering very favorable results. This paper presents the details of the STATL syntax and its semantics. Real examples from bot h the host and network-based extensions of the language are also presented.",
"title": ""
},
{
"docid": "33c497748082b3c62fc1b5e8d5ab9d05",
"text": "The prevention and treatment of malaria is heavily dependent on antimalarial drugs. However, beginning with the emergence of chloroquine (CQ)-resistant Plasmodium falciparum parasites 50 years ago, efforts to control the disease have been thwarted by failed or failing drugs. Mutations in the parasite’s ‘chloroquine resistance transporter’ (PfCRT) are the primary cause of CQ resistance. Furthermore, changes in PfCRT (and in several other transport proteins) are associated with decreases or increases in the parasite’s susceptibility to a number of other antimalarial drugs. Here, we review recent advances in our understanding of CQ resistance and discuss these in the broader context of the parasite’s susceptibilities to other quinolines and related drugs. We suggest that PfCRT can be viewed both as a ‘multidrug-resistance carrier’ and as a drug target, and that the quinoline-resistance mechanism is a potential ‘Achilles’ heel’ of the parasite. We examine a number of the antimalarial strategies currently undergoing development that are designed to exploit the resistance mechanism, including relatively simple measures, such as alternative CQ dosages, as well as new drugs that either circumvent the resistance mechanism or target it directly.",
"title": ""
},
{
"docid": "f3030761b1276ad601c1665e288d799d",
"text": "We explore the top-K rank aggregation problem. Suppose a collection of items is compared in pairs repeatedly, and we aim to recover a consistent ordering that focuses on the top-K ranked items based on partially revealed preference information. We investigate the Bradley-Terry-Luce model in which one ranks items according to their perceived utilities modeled as noisy observations of their underlying true utilities. Our main contributions are two-fold. First, in a general comparison model where item pairs to compare are given a priori, we attain an upper and lower bound on the sample size for reliable recovery of the top-K ranked items. Second, more importantly, extending the result to a random comparison model where item pairs to compare are chosen independently with some probability, we show that in slightly restricted regimes, the gap between the derived bounds reduces to a constant factor, hence reveals that a spectral method can achieve the minimax optimality on the (order-wise) sample size required for top-K ranking. That is to say, we demonstrate a spectral method alone to be sufficient to achieve the optimality and advantageous in terms of computational complexity, as it does not require an additional stage of maximum likelihood estimation that a state-of-the-art scheme employs to achieve the optimality. We corroborate our main results by numerical experiments.",
"title": ""
},
{
"docid": "5dd91b5a3a09075fe1852e5fecd277b0",
"text": "Efficient blood flow depends on two developmental processes that occur within the atrioventricular junction (AVJ) of the heart: conduction delay, which entrains sequential chamber contraction; and valve formation, which prevents retrograde fluid movement. Defects in either result in severe congenital heart disease; however, little is known about the interplay between these two crucial developmental processes. Here, we show that AVJ conduction delay is locally assigned by the morphogenetic events that initiate valve formation. Our data demonstrate that physical separation from endocardial-derived factors prevents AVJ myocardium from becoming fast conducting. Mechanistically, this physical separation is induced by myocardial-derived factors that support cardiac jelly deposition at the onset of valve formation. These data offer a novel paradigm for conduction patterning, whereby reciprocal myocardial-endocardial interactions coordinate the processes of valve formation with establishment of conduction delay. This, in turn, synchronizes the electrophysiological and structural events necessary for the optimization of blood flow through the developing heart.",
"title": ""
},
{
"docid": "d83a90a3a080f4e3bce2a68d918d20ce",
"text": "We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications’ data structures. Frequently used data structures have “average-case” expected running time that’s far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.",
"title": ""
},
{
"docid": "a40d11652a42ac6a6bf4368c9665fb3b",
"text": "This paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes. The taxonomy consists of a classification first of the detection principle, and second of certain operational aspects of the intrusion detection system as such. The systems are also grouped according to the increasing difficulty of the problem they attempt to address. These classifications are used predictively, pointing towards a number of areas of future research in the field of intrusion detection.",
"title": ""
},
{
"docid": "71e65d1ae7ff899467cc93b3858992b8",
"text": "This paper describes a semi-automated process, framework and tools for harvesting, assessing, improving and maintaining high-quality linked-data. The framework, known as DaCura1, provides dataset curators, who may not be knowledge engineers, with tools to collect and curate evolving linked data datasets that maintain quality over time. The framework encompasses a novel process, workflow and architecture. A working implementation has been produced and applied firstly to the publication of an existing social-sciences dataset, then to the harvesting and curation of a related dataset from an unstructured data-source. The framework’s performance is evaluated using data quality measures that have been developed to measure existing published datasets. An analysis of the framework against these dimensions demonstrates that it addresses a broad range of real-world data quality concerns. Experimental results quantify the impact of the DaCura process and tools on data quality through an assessment framework and methodology which combines automated and human data quality controls. Improving Curated WebData Quality with Structured Harvesting and Assessment",
"title": ""
},
{
"docid": "9e6cec136607d572331c3915c7295415",
"text": "AIMS AND OBJECTIVES\nThis study evaluated the effects of handholding and spoken information provided on the anxiety of patients undergoing percutaneous vertebroplasty under local anaesthesia.\n\n\nBACKGROUND\nA surgical intervention usually entails physical discomfort and psychological burden. Furthermore, patients under local anaesthesia are conscious during the surgical intervention, which leads to more anxiety, as patients are aware of their surroundings in the operating theatre.\n\n\nDESIGN\nA quasi-experimental design with a nonequivalent control group was utilised.\n\n\nMETHODS\nAmsterdam preoperative anxiety scale assessed psychological anxiety, while blood pressure and pulse were measured to evaluate physiological anxiety. Participants were 94 patients undergoing percutaneous vertebroplasty in a spine hospital in Gwangju Metropolitan City, South Korea. Thirty patients were assigned to Experimental Group I, 34 to the Experimental Group II and 30 to the control group. During a surgical intervention, nurses held the hands of those in Experimental Group I and provided them with spoken information. Patients in Experimental Group II experienced only handholding.\n\n\nRESULTS\nPsychological anxiety in Experimental Group I was low compared to those in Experimental Group II and the control group. In addition, there were significant decreases in systolic blood pressure in both Experimental Groups compared to the control group.\n\n\nCONCLUSIONS\nHandholding and spoken information provided during a surgical intervention to mitigate psychological anxiety, and handholding to mitigate physiological anxiety can be used in nursing interventions with patients undergoing percutaneous vertebroplasty.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nHandholding and providing nursing information are possibly very useful interventions that are easily implemented by circulating nurses during a surgical intervention. In particular, handholding is a simple, economical and appropriate way to help patient in the operating theatre.",
"title": ""
},
{
"docid": "13897df01d4c03191dd015a04c3a5394",
"text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author",
"title": ""
},
{
"docid": "4818e47ceaec70457701649832fb90c4",
"text": "Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let &thgr; be the fraction of jobs routed to the first I/O device, so that 1 - &thgr; is the fraction routed to the second. Suppose that α = α(&thgr;) is the steady-sate amount of time that a job spends in the system. Given that &thgr; is a decision variable, a designer might wish to minimize α(&thgr;) over &thgr;. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.\n In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we will present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood-ratio-gradient estimators for both time-homogeneous and non-time homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes the performance measure that defines α(&thgr;) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.",
"title": ""
},
{
"docid": "0b8c51f823cb55cbccfae098e98f28b3",
"text": "In this study, we investigate whether the “out of body” vibrotactile illusion known as funneling could be applied to enrich and thereby improve the interaction performance on a tablet-sized media device. First, a series of pilot tests was taken to determine the appropriate operational conditions and parameters (such as the tablet size, holding position, minimal required vibration amplitude, and the effect of matching visual feedback) for a two-dimensional (2D) illusory tactile rendering method. Two main experiments were then conducted to validate the basic applicability and effectiveness of the rendering method, and to further demonstrate how the illusory tactile feedback could be deployed in an interactive application and actually improve user performance. Our results showed that for a tablet-sized device (e.g., iPad mini and iPad), illusory perception was possible (localization performance of up to 85%) using a rectilinear grid with a resolution of 5 $$\\times $$ × 7 (grid size: 2.5 cm) with matching visual feedback. Furthermore, the illusory feedback was found to be a significant factor in improving the user performance in a 2D object search/attention task.",
"title": ""
},
{
"docid": "8d7a41aad86633c9bb7da8adfde71883",
"text": "Nuclear receptors (NRs) are major pharmacological targets that allow an access to the mechanisms controlling gene regulation. As such, some NRs were identified as biological targets of active compounds contained in herbal remedies found in traditional medicines. We aim here to review this expanding literature by focusing on the informative articles regarding the mechanisms of action of traditional Chinese medicines (TCMs). We exemplified well-characterized TCM action mediated by NR such as steroid receptors (ER, GR, AR), metabolic receptors (PPAR, LXR, FXR, PXR, CAR) and RXR. We also provided, when possible, examples from other traditional medicines. From these, we draw a parallel between TCMs and phytoestrogens or endocrine disrupting chemicals also acting via NR. We define common principle of action and highlight the potential and limits of those compounds. TCMs, by finely tuning physiological reactions in positive and negative manners, could act, in a subtle but efficient way, on NR sensors and their transcriptional network.",
"title": ""
},
{
"docid": "60d21d395c472eb36bdfd014c53d918a",
"text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.",
"title": ""
},
{
"docid": "7b9bc654a170d143a64bdae4c421053e",
"text": "Analysis on a developed dynamic model of the dish-Stirling (DS) system shows that maximum solar energy harness can be realized through controlling the Stirling engine speed. Toward this end, a control scheme is proposed for the doubly fed induction generator coupled to the DS system, as a means to achieve maximum power point tracking as the solar insolation level varies. Furthermore, the adopted fuzzy supervisory control technique is shown to be effective in controlling the temperature of the receiver in the DS system as the speed changes. Simulation results and experimental measurements validate the maximum energy harness ability of the proposed variable-speed DS solar-thermal system.",
"title": ""
},
{
"docid": "df29784edea11d395547ca23830f2f62",
"text": "The clinical efficacy of current antidepressant therapies is unsatisfactory; antidepressants induce a variety of unwanted effects, and, moreover, their therapeutic mechanism is not clearly understood. Thus, a search for better and safer agents is continuously in progress. Recently, studies have demonstrated that zinc and magnesium possess antidepressant properties. Zinc and magnesium exhibit antidepressant-like activity in a variety of tests and models in laboratory animals. They are active in forced swim and tail suspension tests in mice and rats, and, furthermore, they enhance the activity of conventional antidepressants (e.g., imipramine and citalopram). Zinc demonstrates activity in the olfactory bulbectomy, chronic mild and chronic unpredictable stress models in rats, while magnesium is active in stress-induced depression-like behavior in mice. Clinical studies demonstrate that the efficacy of pharmacotherapy is enhanced by supplementation with zinc and magnesium. The antidepressant mechanisms of zinc and magnesium are discussed in the context of glutamate, brain-derived neurotrophic factor (BDNF) and glycogen synthase kinase-3 (GSK-3) hypotheses. All the available data indicate the importance of zinc and magnesium homeostasis in the psychopathology and therapy of affective disorders.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
},
{
"docid": "ebb43198da619d656c068f2ab1bfe47f",
"text": "Remote data integrity checking (RDIC) enables a server to prove to an auditor the integrity of a stored file. It is a useful technology for remote storage such as cloud storage. The auditor could be a party other than the data owner; hence, an RDIC proof is based usually on publicly available information. To capture the need of data privacy against an untrusted auditor, Hao et al. formally defined “privacy against third party verifiers” as one of the security requirements and proposed a protocol satisfying this definition. However, we observe that all existing protocols with public verifiability supporting data update, including Hao et al.’s proposal, require the data owner to publish some meta-data related to the stored data. We show that the auditor can tell whether or not a client has stored a specific file and link various parts of those files based solely on the published meta-data in Hao et al.’s protocol. In other words, the notion “privacy against third party verifiers” is not sufficient in protecting data privacy, and hence, we introduce “zero-knowledge privacy” to ensure the third party verifier learns nothing about the client’s data from all available information. We enhance the privacy of Hao et al.’s protocol, develop a prototype to evaluate the performance and perform experiment to demonstrate the practicality of our proposal.",
"title": ""
},
{
"docid": "147a6ce22db736f475408d28d0398651",
"text": "Curating labeled training data has become the primary bottleneck in machine learning. Recent frameworks address this bottleneck with generative models to synthesize labels at scale from weak supervision sources. The generative model's dependency structure directly affects the quality of the estimated labels, but selecting a structure automatically without any labeled data is a distinct challenge. We propose a structure estimation method that maximizes the ℓ 1-regularized marginal pseudolikelihood of the observed data. Our analysis shows that the amount of unlabeled data required to identify the true structure scales sublinearly in the number of possible dependencies for a broad class of models. Simulations show that our method is 100× faster than a maximum likelihood approach and selects 1/4 as many extraneous dependencies. We also show that our method provides an average of 1.5 F1 points of improvement over existing, user-developed information extraction applications on real-world data such as PubMed journal abstracts.",
"title": ""
}
] |
scidocsrr
|
ee0d871ee8a1c4dbcd2763aa618e7a19
|
Robust control for line-of-sight stabilization of a two-axis gimbal system
|
[
{
"docid": "feee488a72016554ebf982762d51426e",
"text": "Optical imaging sensors, such as television or infrared cameras, collect information about targets or target regions. It is thus necessary to control the sensor's line-of-sight (LOS) to achieve accurate pointing. Maintaining sensor orientation toward a target is particularly challenging when the imaging sensor is carried on a mobile vehicle or when the target is highly dynamic. Controlling an optical sensor LOS with an inertially stabilized platform (ISP) can meet these challenges.A target tracker is a process, typically involving image processing techniques, for detecting targets in optical imagery. This article describes the use and design of ISPs and target trackers for imaging optical sensors.",
"title": ""
}
] |
[
{
"docid": "c035b514ee694df3179363296ff48e75",
"text": "A new microcrack-based continuous damage model is developed to describe the behavior of brittle geomaterials under compression dominated stress ®elds. The induced damage is represented by a second rank tensor, which re ̄ects density and orientation of microcracks. The damage evolution law is related to the propagation condition of microcracks. Based on micromechanical analyses of sliding wing cracks, the actual microcrack distributions are replaced by an equivalent set of cracks subjected to a macroscopic local tensile stress. The principles of the linear fracture mechanics are used to develop a suitable macroscopic propagation criterion. The onset of microcrack coalescence leading to localization phenomenon and softening behavior is included by using a critical crack length. The constitutive equations are developed by considering that microcrack growth induces an added material ̄exibility. The eective elastic compliance of damaged material is obtained from the de®nition of a particular Gibbs free energy function. Irreversible damage-related strains due to residual opening of microcracks after unloading are also taken into account. The resulting constitutive equations can be arranged to reveal the physical meaning of each model parameter and to determine its value from standard laboratory tests. An explicit expression for the macroscopic eective constitutive tensor (compliance or stiness) makes it possible, in principal, to determine the critical damage intensity at which the localization condition is satis®ed. The proposed model is applied to two typical brittle rocks (a French granite and Tennessee marble). Comparison between test data and numerical simulations show that the proposed model is able to describe main features of mechanical behaviors observed in brittle geomaterials under compressive stresses. Ó 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d9980c59c79374c5b1ee107d6a5c978f",
"text": "A software module named flash translation layer (FTL) running in the controller of a flash SSD exposes the linear flash memory to the system as a block storage device. The effectiveness of an FTL significantly impacts the performance and durability of a flash SSD. In this research, we propose a new FTL called PCFTL (Plane-Centric FTL), which fully exploits plane-level parallelism supported by modern flash SSDs. Its basic idea is to allocate updates onto the same plane where their associated original data resides on so that the write distribution among planes is balanced. Furthermore, it utilizes fast intra-plane copy-back operations to transfer valid pages of a victim block when a garbage collection occurs. We largely extend a validated simulation environment called SSDsim to implement PCFTL. Comprehensive experiments using realistic enterprise-scale workloads are performed to evaluate its performance with respect to mean response time and durability in terms of standard deviation of writes per plane. Experimental results demonstrate that compared with the well-known DFTL, PCFTL improves performance and durability by up to 47 and 80 percent, respectively. Compared with its earlier version (called DLOOP), PCFTL enhances durability by up to 74 percent while delivering a similar I/O performance.",
"title": ""
},
{
"docid": "76c19c70f11244be16248a1b4de2355a",
"text": "We have recently witnessed the emerging of cloud computing on one hand and robotics platforms on the other hand. Naturally, these two visions have been merging to give birth to the Cloud Robotics paradigm in order to offer even more remote services. But such a vision is still in its infancy. Architectures and platforms are still to be defined to efficiently program robots so they can provide different services, in a standardized way masking their heterogeneity. This paper introduces Open Mobile Cloud Robotics Interface (OMCRI), a Robot-as-a-Service vision based platform, which offers a unified easy access to remote heterogeneous mobile robots. OMCRI encompasses an extension of the Open Cloud Computing Interface (OCCI) standard and a gateway hosting mobile robot resources. We then provide an implementation of OMCRI based on the open source model-driven Eclipse-based OCCIware tool chain and illustrates its use for three off-the-shelf mobile robots: Lego Mindstorm NXT, Turtlebot, and Parrot AR. Drone.",
"title": ""
},
{
"docid": "1854e443a1b4b0ba9762c7364bbe5c69",
"text": "In this paper, we describe our investigation of traces of naturally occurring emotions in electrical brain signals, that can be used to build interfaces that respond to our emotional state. This study confirms a number of known affective correlates in a realistic, uncontrolled environment for the emotions of valence (or pleasure), arousal and dominance: (1) a significant decrease in frontal power in the theta range is found for increasingly positive valence, (2) a significant frontal increase in power in the alpha range is associated with increasing emotional arousal, (3) a significant right posterior power increase in the delta range correlates with increasing arousal and (4) asymmetry in power in the lower alpha bands correlates with self-reported valence. Furthermore, asymmetry in the higher alpha bands correlates with self-reported dominance. These last two effects provide a simple measure for subjective feelings of pleasure and feelings of control.",
"title": ""
},
{
"docid": "9ff6c86b2920c10d33e1b3d52fbc92d8",
"text": "In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsic complex structure, and lack of ground truth of underlying neural activities, modeling tfMRI data is hard and challenging. Previously proposed data-modeling methods including Independent Component Analysis (ICA) and Sparse Dictionary Learning only provided a weakly established model based on blind source separation under the strong assumption that original fMRI signals could be linearly decomposed into time series components with corresponding spatial maps. Meanwhile, analyzing and learning a large amount of tfMRI data from a variety of subjects has been shown to be very demanding but yet challenging even with technological advances in computational hardware. Given the Convolutional Neural Network (CNN), a robust method for learning high-level abstractions from low-level data such as tfMRI time series, in this work we propose a fast and scalable novel framework for distributed deep Convolutional Autoencoder model. This model aims to both learn the complex hierarchical structure of the tfMRI data and to leverage the processing power of multiple GPUs in a distributed fashion. To implement such a model, we have created an enhanced processing pipeline on the top of Apache Spark and Tensorflow library, leveraging from a very large cluster of GPU machines. Experimental data from applying the model on the Human Connectome Project (HCP) show that the proposed model is efficient and scalable toward tfMRI big data analytics, thus enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI big data in the future.",
"title": ""
},
{
"docid": "28de66e48c6bb6341000f59eadd2e767",
"text": "This paper presents a speech emotion recognition system using a recurrent neural network (RNN) model trained by an efficient learning algorithm. The proposed system takes into account the long-range contextual effect and the uncertainty of emotional label expressions. To extract high-level representation of emotional states with regard to its temporal dynamics, a powerful learning method with a bidirectional long short-term memory (BLSTM) structure is adopted. To overcome the uncertainty of emotional labels, such that all frames in the same utterance are mapped to the same emotional label, it is assumed that the label of each frame is regarded as a sequence of random variables. The sequences are then trained by the proposed learning algorithm. The weighted accuracy of the proposed emotion recognition system is improved up to 12% compared to the DNNELM-based emotion recognition system used as a baseline.",
"title": ""
},
{
"docid": "a2fcc3734115b76ca562dc190ebc5349",
"text": "Image inpainting is concerned with the completion of missing data in an image. When the area to inpaint is relatively large, this problem becomes challenging. In these cases, traditional methods based on patch models and image propagation are limited, since they fail to consider a global perspective of the problem. In this letter, we employ a recently proposed dictionary learning framework, coined Trainlets, to design large adaptable atoms from a corpus of various datasets of face images by leveraging the online sparse dictionary learning algorithm. We, therefore, formulate the inpainting task as an inverse problem with a sparse-promoting prior based on the learned global model. Our results show the effectiveness of our scheme, obtaining much more plausible results than competitive methods.",
"title": ""
},
{
"docid": "b06679e91a8d68b8535054e36c333a82",
"text": "With its design concept of cross-platform portability, OpenCL can be used not only on GPUs (for which it is quite popular), but also on CPUs. Whether porting GPU programs to CPUs, or simply writing new code for CPUs, using OpenCL brings up the performance issue, usually raised in one of two forms: \"OpenCL is not performance portable!\" or \"Why using OpenCL for CPUs after all?!\". We argue that both issues can be addressed by a thorough study of the factors that impact the performance of OpenCL on CPUs. This analysis is the focus of this paper. Specifically, starting from the two main architectural mismatches between many-core CPUs and the OpenCL platform-parallelism granularity and the memory model-we identify eight such performance \"traps\" that lead to performance degradation in OpenCL for CPUs. Using multiple code examples, from both synthetic and real-life benchmarks, we quantify the impact of these traps, showing how avoiding them can give up to 10 times better performance. Furthermore, we point out that the solutions we provide for avoiding these traps are simple and generic code transformations, which can be easily adopted by either programmers or automated tools. Therefore, we conclude that a certain degree of OpenCL inter-platform performance portability, while indeed not a given, can be achieved by simple and generic code transformations.",
"title": ""
},
{
"docid": "a9b159f9048c1dadb941e1462ba5826f",
"text": "Distributed data processing is becoming a reality. Businesses want to do it for many reasons, and they often must do it in order to stay competitive. While much of the infrastructure for distributed data processing is already there (e.g., modern network technology), a number of issues make distributed data processing still a complex undertaking: (1) distributed systems can become very large, involving thousands of heterogeneous sites including PCs and mainframe server machines; (2) the state of a distributed system changes rapidly because the load of sites varies over time and new sites are added to the system; (3) legacy systems need to be integrated—such legacy systems usually have not been designed for distributed data processing and now need to interact with other (modern) systems in a distributed environment. This paper presents the state of the art of query processing for distributed database and information systems. The paper presents the “textbook” architecture for distributed query processing and a series of techniques that are particularly useful for distributed database systems. These techniques include special join techniques, techniques to exploit intraquery paralleli sm, techniques to reduce communication costs, and techniques to exploit caching and replication of data. Furthermore, the paper discusses different kinds of distributed systems such as client-server, middleware (multitier), and heterogeneous database systems, and shows how query processing works in these systems.",
"title": ""
},
{
"docid": "f835074be8ff74361f1ea700ae737ace",
"text": "Exploring community is fundamental for uncovering the connections between structure and function of complex networks and for practical applications in many disciplines such as biology and sociology. In this paper, we propose a TTR-LDA-Community model which combines the Latent Dirichlet Allocation model (LDA) and the Girvan-Newman community detection algorithm with an inference mechanism. The model is then applied to data from Delicious, a popular social tagging system, over the time period of 2005-2008. Our results show that 1) users in the same community tend to be interested in similar set of topics in all time periods; and 2) topics may divide into several sub-topics and scatter into different communities over time. We evaluate the effectiveness of our model and show that the TTR-LDA-Community model is meaningful for understanding communities and outperforms TTR-LDA and LDA models in tag prediction.",
"title": ""
},
{
"docid": "6931f8727f2c4e2aab19c94bcd783f59",
"text": "The steady-state and dynamic performance of a stator voltage-controlled current source inverter (CSI) induction motor drive are presented. Commutation effects are neglected and the analytical results are based on the fundamental component. A synchronously rotating reference frame linearized model in terms of a set of nondimensional parameters, based on the rotor transient time constant, is developed. It is shown that the control scheme is capable of stabilizing the drive over a region identical to the statically stable region of a conventional voltage-fed induction motor. A simple approximate expression for the drive dominant poles under no-load conditions and graphical representations of the drive dynamics under load conditions are presented. The effect of parameter variations on the drive dynamic response can be evaluated from these results. An analog simulation of the drive is developed, and the results confirm the small signal analysis of the drive system. In addition the steady-state results of the analog simulation are compared with experimental results, as well as with corresponding values obtained from a stator referred equivalent circuit. The comparison indicates good correspondence under load conditions and the limitation of applying the equivalent circuit for no-load conditions without proper recognition of the system losses.",
"title": ""
},
{
"docid": "f8c7c5c9ccc6efa5e6e285b399ede379",
"text": "Urinary tract infections are the most common bacterial infections in women. Most urinary tract infections are acute uncomplicated cystitis. Identifiers of acute uncomplicated cystitis are frequency and dysuria in an immunocompetent woman of childbearing age who has no comorbidities or urologic abnormalities. Physical examination is typically normal or positive for suprapubic tenderness. A urinalysis, but not urine culture, is recommended in making the diagnosis. Guidelines recommend three options for first-line treatment of acute uncomplicated cystitis: fosfomycin, nitrofurantoin, and trimethoprim/sulfamethoxazole (in regions where the prevalence of Escherichia coli resistance does not exceed 20 percent). Beta-lactam antibiotics, amoxicillin/clavulanate, cefaclor, cefdinir, and cefpodoxime are not recommended for initial treatment because of concerns about resistance. Urine cultures are recommended in women with suspected pyelonephritis, women with symptoms that do not resolve or that recur within two to four weeks after completing treatment, and women who present with atypical symptoms.",
"title": ""
},
{
"docid": "87033772450237ce38d92e7ab6dba616",
"text": "This paper presents the first functional SiC super junction devices, SiC SJ Schottky diodes, which substantially improve the trade-off between the breakdown voltage and specific on-resistance in SiC power devices. Processes for fabricating SJ structures by using a trench-etching-and-sidewall-implant method has been developed and a functional SJ Schottky diode has been demonstrated based on this processing method. The measured cell blocking voltage was 1350V, which achieves 95% of the simulated blocking voltage for the ideally-charge-balanced SJ structure. The measured device specific on-resistance was 0.92mΩ·cm2. The SJ drift region specific on-resistance as low as 0.32 mΩ·cm2 was obtained after subtracting the substrate resistance.",
"title": ""
},
{
"docid": "188322d93cb3242ccab716e810faaaac",
"text": "Citation relationship between scientific publications has been successfully used for scholarly bibliometrics, information retrieval and data mining tasks, and citation-based recommendation algorithms are well documented. While previous studies investigated citation relations from various viewpoints, most of them share the same assumption that, if paper1 cites paper2 (or author1 cites author2), they are connected, regardless of citation importance, sentiment, reason, topic, or motivation. However, this assumption is oversimplified. In this study, we employ an innovative \"context-rich heterogeneous network\" approach, which paves a new way for citation recommendation task. In the network, we characterize 1) the importance of citation relationships between citing and cited papers, and 2) the topical citation motivation. Unlike earlier studies, the citation information, in this paper, is characterized by citation textual contexts extracted from the full-text citing paper. We also propose algorithm to cope with the situation when large portion of full-text missing information exists in the bibliographic repository. Evaluation results show that, context-rich heterogeneous network can significantly enhance the citation recommendation performance.",
"title": ""
},
{
"docid": "c4ab0af91f664aa6d7674f986608ab06",
"text": "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.",
"title": ""
},
{
"docid": "5894fd2d3749df78afb49b27ad26f459",
"text": "Information security policy compliance (ISP) is one of the key concerns that face organizations today. Although technical and procedural measures help improve information security, there is an increased need to accommodate human, social and organizational factors. Despite the plethora of studies that attempt to identify the factors that motivate compliance behavior or discourage abuse and misuse behaviors, there is a lack of studies that investigate the role of ethical ideology per se in explaining compliance behavior. The purpose of this research is to investigate the role of ethics in explaining Information Security Policy (ISP) compliance. In that regard, a model that integrates behavioral and ethical theoretical perspectives is developed and tested. Overall, analyses indicate strong support for the validation of the proposed theoretical model.",
"title": ""
},
{
"docid": "16a8fc39efe95c05a25deba4da6aa806",
"text": "Although effective treatments for obsessive-compulsive disorder (OCD) exist, there are significant barriers to receiving evidence-based care. Mobile health applications (Apps) offer a promising way of overcoming these barriers by increasing access to treatment. The current study investigated the feasibility, acceptability, and preliminary efficacy of LiveOCDFree, an App designed to help OCD patients conduct exposure and response prevention (ERP). Twenty-one participants with mild to moderate symptoms of OCD were enrolled in a 12-week open trial of App-guided self-help ERP. Self-report assessments of OCD, depression, anxiety, and quality of life were completed at baseline, mid-treatment, and post-treatment. App-guided ERP was a feasible and acceptable self-help intervention for individuals with OCD, with high rates of retention and satisfaction. Participants reported significant improvement in OCD and anxiety symptoms pre- to post-treatment. Findings suggest that LiveOCDFree is a feasible and acceptable self-help intervention for OCD. Preliminary efficacy results are encouraging and point to the potential utility of mobile Apps in expanding the reach of existing empirically supported treatments.",
"title": ""
},
{
"docid": "09d7bb1b4b976e6d398f20dc34fc7678",
"text": "A compact wideband quarter-wave transformer using microstrip lines is presented. The design relies on replacing a uniform microstrip line with a multi-stage equivalent circuit. The equivalent circuit is a cascade of either T or π networks. Design equations for both types of equivalent circuits have been derived. A quarter-wave transformer operating at 1 GHz is implemented. Simulation results indicate a −15 dB impedance bandwidth exceeding 64% for a 3-stage network with less than 0.25 dB of attenuation within the bandwidth. Both types of equivalent circuits provide more than 40% compaction with proper selection of components. Measured results for the fabricated unit deviate within acceptable limits. The designed quarter-wave transformer may be used to replace 90° transmission lines in various passive microwave components.",
"title": ""
},
{
"docid": "1acb7ca89eab0a0b4306aa2ebb844018",
"text": "This paper describes work in progress. Our research is focused on efficient construction of effective models for spam detection. Clustering messages allows for efficient labeling of a representative sample of messages for learning a spam detection model using a Random Forest for classification and active learning for refining the classification model. Results are illustrated for the 2007 TREC Public Spam Corpus. The area under the Receiver Operating Characteristic (ROC) curve is competitive with other solutions while requiring much fewer labeled training examples.",
"title": ""
},
{
"docid": "ed6aec69b76444f877343277865e2fd0",
"text": "Abstract Context: In the context of exploring the art, science and engineering of programming, the question of which programming languages should be taught first has been fiercely debated since computer science teaching started in universities. Failure to grasp programming readily almost certainly implies failure to progress in computer science. Inquiry:What first programming languages are being taught? There have been regular national-scale surveys in Australia and New Zealand, with the only US survey reporting on a small subset of universities. This the first such national survey of universities in the UK. Approach: We report the results of the first survey of introductory programming courses (N = 80) taught at UK universities as part of their first year computer science (or related) degree programmes, conducted in the first half of 2016. We report on student numbers, programming paradigm, programming languages and environment/tools used, as well as the underpinning rationale for these choices. Knowledge: The results in this first UK survey indicate a dominance of Java at a time when universities are still generally teaching students who are new to programming (and computer science), despite the fact that Python is perceived, by the same respondents, to be both easier to teach as well as to learn. Grounding: We compare the results of this survey with a related survey conducted since 2010 (as well as earlier surveys from 2001 and 2003) in Australia and New Zealand. Importance: This survey provides a starting point for valuable pedagogic baseline data for the analysis of the art, science and engineering of programming, in the context of substantial computer science curriculum reform in UK schools, as well as increasing scrutiny of teaching excellence and graduate employability for UK universities.",
"title": ""
}
] |
scidocsrr
|
bd275a485fe587b005bb5af98764bbd2
|
Forensic hash for multimedia information
|
[
{
"docid": "bad3eec42d75357aca75fa993ab49e52",
"text": "By robust image hashing (RIH), a digital image is transformed into a short binary string, of fixed length, called hash value, hash code or simply hash. Other terms used occasionally for the hash are digital signature, fingerprint, message digest or label. The hash is attached to the image, inserted by watermarking or transmitted by side channels. The hash is robust to image low distortion, fragile to image tempering and have low collision probability. The main applications of RIH are in image copyright protection, content authentication and database indexing. The goal of copyright protections is to prevent possible illegal usage of digital images by identifying the image even when its pixels are distorted by small tempering or by common manipulation (transmission, lossy compression etc.). In such cases, the image is still identifiable by the hash, which is robust to low distortions (Khelifi & Jiang, 2010). The content authentication is today, one of the main issues in digital image security. The image content can be easily modified by using commercial image software. A common example is the object insertion or removal. Although visually undetectable, such modifications are put into evidence by the hash, which is fragile to image tempering (Zhao & al., 2013). Finally, in large databases management, the RIH can be an effective solution for image efficient retrieval, by replacing the manual annotation with the hash, which is automated extracted (Lin & al., 2001). The properties that recommend the hash for indexing are the low collision probability and the content-based features. The origins of the hash lay in computer science, where one of the earliest applications was the efficient search of large tables. Here, the hash – calculated by a hash function – serves as index for the data recorded in the table. Since, in general, such functions map more data strings to the same hash, the hash designates in fact a bucket of records, helping to narrow the search. Although very efficient in table searching, these hashes are not appropriate for file authentication, where the low collision probability is of high concern. The use in authentication applications has led to the development of the cryptographic hashing, a branch including hash functions with the following special properties: preimage resistance (by knowing the hash it is very difficult to find out the file that generated it), second image resistance (given a file, it is very difficult to find another with the same hash) and collision resistance (it is very difficult to find two files with the same hash). They allow the hash to withstand the cryptanalytic attacks. The development of multimedia applications in the last two decades has brought central stage the digital images. The indexing or authentication of these data has been a new challenge for hashing because of a property that might be called perceptible identity. It could be defined as follows: although the image pixels undergo slight modification during ordinary operations, the image is perceived as being the same. The perceptual similar images must have similar hashes. The hashing complying with this demand is called robust or perceptual. Specific methods have had to be developed in order to obtain hashes tolerant to distortions, inherent to image conventional handling like archiving, scaling, rotation, cropping, noise filtering, print-and-scan etc., called in one word non malicious attacks. These methods are grouped under the generic name of RIH. 
In this article, we define the main terms used in RIH and discuss the solutions commonly used for designing a RIH scheme. The presentation will be done in the light of robust hash inherent properties: randomness, independence and robustness.",
"title": ""
},
{
"docid": "099371952baecb790cf0600ae3b26e41",
"text": "Digital watermarks have recently been proposed for authentication of both video data and still images and for integrity verification of visual multimedia. In such applications, the watermark has to depend on a secret key and on the original image. It is important that the dependence on the key be sensitive, while the dependence on the image be continuous (robust). Both requirements can be satisfied using special image digest functions that return the same bit-string for a whole class of images derived from an original image using common processing operations. It is further required that two completely different images produce completely different bit-strings. In this paper, we discuss methods how such robust hash functions can be built. We describe an algorithm and evaluate its performance. We also show how the hash bits As another application, the robust image digest can be used as a search index for an efficient image database",
"title": ""
}
] |
[
{
"docid": "ad58798807256cff2eff9d3befaf290a",
"text": "Centrality indices are an essential concept in network analysis. For those based on shortest-path distances the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices. ∗Research supported in part by DFG under grant Br 2158/2-3",
"title": ""
},
{
"docid": "8a2fb295b5d859bd6f376b7beeb5bfa5",
"text": "Spanning functions from the simplest reflex arc to complex cognitive processes, neural circuits have diverse functional roles. In the cerebral cortex, functional domains such as visual processing, attention, memory, and cognitive control rely on the development of distinct yet interconnected sets of anatomically distributed cortical and subcortical regions. The developmental organization of these circuits is a remarkably complex process that is influenced by genetic predispositions, environmental events, and neuroplastic responses to experiential demand that modulates connectivity and communication among neurons, within individual brain regions and circuits, and across neural pathways. Recent advances in neuroimaging and computational neurobiology, together with traditional investigational approaches such as histological studies and cellular and molecular biology, have been invaluable in improving our understanding of these developmental processes in humans in both health and illness. To contextualize the developmental origins of a wide array of neuropsychiatric illnesses, this review describes the development and maturation of neural circuits from the first synapse through critical periods of vulnerability and opportunity to the emergent capacity for cognitive and behavioral regulation, and finally the dynamic interplay across levels of circuit organization and developmental epochs.",
"title": ""
},
{
"docid": "869f52723b215ba8dc5c4c614b2c79a6",
"text": "Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.",
"title": ""
},
{
"docid": "cd0c68845416f111307ae7e14bfb7491",
"text": "Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals' activity space. First, a survey was conducted to collect individuals' daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment.",
"title": ""
},
{
"docid": "9813df16b1852cf6d843ff3e1c67fa88",
"text": "Traumatic neuromas are tumors resulting from hyperplasia of axons and nerve sheath cells after section or injury to the nervous tissue. We present a case of this tumor, confirmed by anatomopathological examination, in a male patient with history of circumcision. Knowledge of this entity is very important in achieving the differential diagnosis with other lesions that affect the genital area such as condyloma acuminata, bowenoid papulosis, lichen nitidus, sebaceous gland hyperplasia, achrochordon and pearly penile papules.",
"title": ""
},
{
"docid": "40e34327594857c1337ae72c2b50b1a2",
"text": "Very few cybercrimes are committed by females. Therefore, there has been a dearth of research on this topic. It is important that we understand the relationships between gender and cybercrime, to inform crime prevention strategies and understand the particular problems female offenders may face. This research draws from extensive data gathered in relation to cybercrime offenders, both male and female. The research explores the types of roles female computer crime offenders take on, and their social experiences, finding that, compared to males, they experience more adverse life events. Reasons for the lack of female involvement in cybercrime include the barriers female face when engaging with the predominantly masculine online communities that are important for learning and sharing information.",
"title": ""
},
{
"docid": "1919e173d8bfbff038837322794f0ca1",
"text": "In this tutorial, we provided a comprehensive overview of coalitional game theory, and its usage in wireless and communication networks. For this purpose, we introduced a novel classification of coalitional games by grouping the sparse literature into three distinct classes of games: canonical coalitional games, coalition formation games, and coalitional graph games. For each class, we explained in details the fundamental properties, discussed the main solution concepts, and provided an in-depth analysis of the methodologies and approaches for using these games in both game theory and communication applications. The presented applications have been carefully selected from a broad range of areas spanning a diverse number of research problems. The tutorial also sheds light on future opportunities for using the strong analytical tool of coalitional games in a number of applications. In a nutshell, this article fills a void in existing communications literature, by providing a novel tutorial on applying coalitional game theory in communication networks through comprehensive theory and technical details as well as through practical examples drawn from both game theory and communication application.",
"title": ""
},
{
"docid": "6f94fd155f3689ab1a6b242243b13e09",
"text": "Personalized medicine performs diagnoses and treatments according to the DNA information of the patients. The new paradigm will change the health care model in the future. A doctor will perform the DNA sequence matching instead of the regular clinical laboratory tests to diagnose and medicate the diseases. Additionally, with the help of the affordable personal genomics services such as 23andMe, personalized medicine will be applied to a great population. Cloud computing will be the perfect computing model as the volume of the DNA data and the computation over it are often immense. However, due to the sensitivity, the DNA data should be encrypted before being outsourced into the cloud. In this paper, we start from a practical system model of the personalize medicine and present a solution for the secure DNA sequence matching problem in cloud computing. Comparing with the existing solutions, our scheme protects the DNA data privacy as well as the search pattern to provide a better privacy guarantee. We have proved that our scheme is secure under the well-defined cryptographic assumption, i.e., the sub-group decision assumption over a bilinear group. Unlike the existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using the real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real world problems is practical, and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem but it applies to general privacy preserving pattern matching problems which is widely used in real world.",
"title": ""
},
{
"docid": "7c3ea2f5dc309058b95a6070fcb35266",
"text": "The olive psyllid, Euphyllura phillyreae Foerster is one of the most destructive pests on buds and flowers of olive tree (Olea europaea L.) in May when the olive growers cannot apply any insecticides against the pest. Temperature-dependent development of the psyllid was studied at constant temperatures ranged 16–26°C. A degree-day (DD) model was developed to predict the larval emergence using the weekly cumulative larval counts and daily mean temperatures. Linear regression analysis estimated a lower developmental threshold of 4.1 and 4.3°C and a thermal constant of 164.17 and 466.13 DD for development of egg and larva, respectively. The cumulative larval counts of E. phillyreae approximated by probit transformation were plotted against time, expressed as the sum of DD above 4.3°C, the starting date when the olive tree phenology was the period of flower cluster initiation. A linear model was used to describe the relationship of DDs and probit values of larval emergence patterns of E. phillyreae and predicted that 10, 50 and 95% emergence of the larvae required 235.81, 360.22 and 519.93 DD, respectively, with errors of 1–3 days compared to observed values. Based on biofix depends the development of olive tree phenology; the DD model can be used as a forecasting method for proper timing of insecticide applications against E. phillyreae larvae in olive groves.",
"title": ""
},
{
"docid": "0dad686449811de611e9c55dbc9fc255",
"text": "Neural networks with tree-based sentence encoders have shown better results on many downstream tasks. Most of existing tree-based encoders adopt syntactic parsing trees as the explicit structure prior. To study the effectiveness of different tree structures, we replace the parsing trees with trivial trees (i.e., binary balanced tree, left-branching tree and right-branching tree) in the encoders. Though trivial trees contain no syntactic information, those encoders get competitive or even better results on all of the ten downstream tasks we investigated. This surprising result indicates that explicit syntax guidance may not be the main contributor to the superior performances of tree-based neural sentence modeling. Further analysis show that tree modeling gives better results when crucial words are closer to the final representation. Additional experiments give more clues on how to design an effective tree-based encoder. Our code is opensource and available at https://github. com/ExplorerFreda/TreeEnc.",
"title": ""
},
{
"docid": "214be33e744fc211174a8164e26e2f36",
"text": "On-chip communication remains as a key research issue at the gates of the manycore era. In response to this, novel interconnect technologies have opened the door to new Network-on-Chip (NoC) solutions towards greater scalability and architectural flexibility. Particularly, wireless on-chip communication has garnered considerable attention due to its inherent broadcast capabilities, low latency, and system-level simplicity. This work presents OrthoNoC, a wired-wireless architecture that differs from existing proposals in that both network planes are decoupled and driven by traffic steering policies enforced at the network interfaces. With these and other design decisions, OrthoNoC seeks to emphasize the ordered broadcast advantage offered by the wireless technology. The performance and cost of OrthoNoC are first explored using synthetic traffic, showing substantial improvements with respect to other wired-wireless designs with a similar number of antennas. Then, the applicability of OrthoNoC in the multiprocessor scenario is demonstrated through the evaluation of a simple architecture that implements fast synchronization via ordered broadcast transmissions. Simulations reveal significant execution time speedups and communication energy savings for 64-threaded benchmarks, proving that the value of OrthoNoC goes beyond simply improving the performance of the on-chip interconnect.",
"title": ""
},
{
"docid": "4bc74a746ef958a50bb8c542aa25860f",
"text": "A new approach to super resolution line spectrum estimation in both temporal and spatial domain using a coprime pair of samplers is proposed. Two uniform samplers with sample spacings MT and NT are used where M and N are coprime and T has the dimension of space or time. By considering the difference set of this pair of sample spacings (which arise naturally in computation of second order moments), sample locations which are O(MN) consecutive multiples of T can be generated using only O(M + N) physical samples. In order to efficiently use these O(MN) virtual samples for super resolution spectral estimation, a novel algorithm based on the idea of spatial smoothing is proposed, which can be used for estimating frequencies of sinusoids buried in noise as well as for estimating Directions-of-Arrival (DOA) of impinging signals on a sensor array. This technique allows us to construct a suitable positive semidefinite matrix on which subspace based algorithms like MUSIC can be applied to detect O(MN) spectral lines using only O(M + N) physical samples.",
"title": ""
},
{
"docid": "917ce5ca904c8866ddc84c113dc93b91",
"text": "Traditional media outlets are known to report political news in a biased way, potentially affecting the political beliefs of the audience and even altering their voting behaviors. Therefore, tracking bias in everyday news and building a platform where people can receive balanced news information is important. We propose a model that maps the news media sources along a dimensional dichotomous political spectrum using the co-subscriptions relationships inferred by Twitter links. By analyzing 7 million follow links, we show that the political dichotomy naturally arises on Twitter when we only consider direct media subscription. Furthermore, we demonstrate a real-time Twitter-based application that visualizes an ideological map of various media sources.",
"title": ""
},
{
"docid": "a7dff1f19690e31f90e0fa4a85db5d97",
"text": "This paper presents BOOM version 2, an updated version of the Berkeley Out-of-Order Machine first presented in [3]. The design exploration was performed through synthesis, place and route using the foundry-provided standard-cell library and the memory compiler in the TSMC 28 nm HPM process (high performance mobile). BOOM is an open-source processor that implements the RV64G RISC-V Instruction Set Architecture (ISA). Like most contemporary high-performance cores, BOOM is superscalar (able to execute multiple instructions per cycle) and out-oforder (able to execute instructions as their dependencies are resolved and not restricted to their program order). BOOM is implemented as a parameterizable generator written using the Chisel hardware construction language [2] that can used to generate synthesizable implementations targeting both FPGAs and ASICs. BOOMv2 is an update in which the design effort has been informed by analysis of synthesized, placed and routed data provided by a contemporary industrial tool flow. We also had access to standard singleand dual-ported memory compilers provided by the foundry, allowing us to explore design trade-offs using different SRAM memories and comparing against synthesized flip-flop arrays. The main distinguishing features of BOOMv2 include an updated 3-stage front-end design with a bigger set-associative Branch Target Buffer (BTB); a pipelined register rename stage; split floating point and integer register files; a dedicated floating point pipeline; separate issue windows for floating point, integer, and memory micro-operations; and separate stages for issue-select and register read. Managing the complexity of the register file was the largest obstacle to improving BOOM’s clock frequency. We spent considerable effort on placing-and-routing a semi-custom 9port register file to explore the potential improvements over a fully synthesized design, in conjunction with microarchitectural techniques to reduce the size and port count of the register file. BOOMv2 has a 37 fanout-of-four (FO4) inverter delay after synthesis and 50 FO4 after place-and-route, a 24% reduction from BOOMv1’s 65 FO4 after place-and-route. Unfortunately, instruction per cycle (IPC) performance drops up to 20%, mostly due to the extra latency between load instructions and dependent instructions. However, the new BOOMv2 physical design paves the way for IPC recovery later. BOOMv1-2f3i int/idiv/fdiv",
"title": ""
},
{
"docid": "c6574f7f4b24adcc25ab4d84d1e8b898",
"text": "Configuration problems are not only prevalent, but also severely impair the reliability of today's system software. One fundamental reason is the ever-increasing complexity of configuration, reflected by the large number of configuration parameters (\"knobs\"). With hundreds of knobs, configuring system software to ensure high reliability and performance becomes a daunting, error-prone task. This paper makes a first step in understanding a fundamental question of configuration design: \"do users really need so many knobs?\" To provide the quantitatively answer, we study the configuration settings of real-world users, including thousands of customers of a commercial storage system (Storage-A), and hundreds of users of two widely-used open-source system software projects. Our study reveals a series of interesting findings to motivate software architects and developers to be more cautious and disciplined in configuration design. Motivated by these findings, we provide a few concrete, practical guidelines which can significantly reduce the configuration space. Take Storage-A as an example, the guidelines can remove 51.9% of its parameters and simplify 19.7% of the remaining ones with little impact on existing users. Also, we study the existing configuration navigation methods in the context of \"too many knobs\" to understand their effectiveness in dealing with the over-designed configuration, and to provide practices for building navigation support in system software.",
"title": ""
},
{
"docid": "a7d9ac415843146b82139e50edf4ccf2",
"text": "Recommender Systems (RSs) are software tools and techniques providing suggestions of relevant items to users. These systems have received increasing attention from both academy and industry since the 90’s, due to a variety of practical applications as well as complex problems to solve. Since then, the number of research papers published has increased significantly in many application domains (books, documents, images, movies, music, shopping, TV programs, and others). One of these domains has our attention in this paper due to the massive proliferation of televisions (TVs) with computational and network capabilities and due to the large amount of TV content and TV-related content available on the Web. With the evolution of TVs and RSs, the diversity of recommender systems for TV has increased substantially. In this direction, it is worth mentioning that we consider “recommender systems for TV” as those that make recommendations of both TV-content and any content related to TV. Due to this diversity, more investigation is necessary because research on recommender systems for TV domain is still broader and less mature than in other research areas. Thus, this literature review (LR) seeks to classify, synthesize, and present studies according to different perspectives of RSs in the television domain. For that, we initially identified, from the scientific literature, 282 relevant papers published from 2003 to May, 2015. The papers were then categorized and discussed according to different research and development perspectives: recommended item types, approaches, algorithms, architectural models, output devices, user profiling and evaluation. The obtained results can be useful to reveal trends and opportunities for both researchers and practitioners in the area.",
"title": ""
},
{
"docid": "aba638a83116131a62dcce30a7470252",
"text": "A general method is proposed to automatically generate a DfT solution aiming at the detection of catastrophic faults in analog and mixed-signal integrated circuits. The approach consists in modifying the topology of the circuit by pulling up (down) nodes and then probing differentiating node voltages. The method generates a set of optimal hardware implementations addressing the multi-objective problem such that the fault coverage is maximized and the silicon overhead is minimized. The new method was applied to a real-case industrial circuit, demonstrating a nearly 100 percent coverage at the expense of an area increase of about 5 percent.",
"title": ""
},
{
"docid": "166dbfefe4323c5174227a4fd72b1546",
"text": "As interest grows in the use of linguistically annotated corpora in research and teaching of foreign languages and literature, treebanks of various historical texts have been developed. We introduce the first large-scale dependency treebank for Classical Chinese literature. Derived from the Stanford dependency types, it consists of over 32K characters drawn from a collection of poems written in the 8 th century CE. We report on the design of new dependency relations, discuss aspects of the annotation process and evaluation, and illustrate its use in a study of parallelism in Classical Chinese poetry.",
"title": ""
},
{
"docid": "87737f028cf03a360a3e7affe84c9bc9",
"text": "This article provides an empirical statistical analysis and discussion of the predictive abilities of selected customer lifetime value (CLV) models that could be used in online shopping within e-commerce business settings. The comparison of CLV predictive abilities, using selected evaluation metrics, is made on selected CLV models: Extended Pareto/NBD model (EP/NBD), Markov chain model and Status Quo model. The article uses six online store datasets with annual revenues in the order of tens of millions of euros for the comparison. The EP/NBD model has outperformed other selected models in a majority of evaluation metrics and can be considered good and stable for non-contractual relations in online shopping. The implications for the deployment of selected CLV models in practice, as well as suggestions for future research, are also discussed.",
"title": ""
}
] |
scidocsrr
|
0ab5807f327a31b0e377e1510445b1fd
|
Processing performance on Apache Pig, Apache Hive and MySQL cluster
|
[
{
"docid": "3cab403ffab3e44252174ab5d7d985f8",
"text": "A prominent parallel data processing tool MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, efficiency per node, and simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce its optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce.",
"title": ""
},
{
"docid": "cd35602ecb9546eb0f9a0da5f6ae2fdf",
"text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [3] is a popular open-source map-reduce implementation which is being used as an alternative to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language HiveQL, which are compiled into map-reduce jobs executed on Hadoop. In addition, HiveQL supports custom map-reduce scripts to be plugged into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog, Hive-Metastore, containing schemas and statistics, which is useful in data exploration and query optimization. In Facebook, the Hive warehouse contains several thousand tables with over 700 terabytes of data and is being used extensively for both reporting and ad-hoc analyses by more than 100 users. The rest of the paper is organized as follows. Section 2 describes the Hive data model and the HiveQL language with an example. Section 3 describes the Hive system architecture and an overview of the query life cycle. Section 4 provides a walk-through of the demonstration. We conclude with future work in Section 5.",
"title": ""
},
{
"docid": "25adc988a57d82ae6de7307d1de5bf71",
"text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [1] is a popular open-source map-reduce implementation which is being used in companies like Yahoo, Facebook etc. to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language - HiveQL, which are compiled into map-reduce jobs that are executed using Hadoop. In addition, HiveQL enables users to plug in custom map-reduce scripts into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog - Metastore - that contains schemas and statistics, which are useful in data exploration, query optimization and query compilation. In Facebook, the Hive warehouse contains tens of thousands of tables and stores over 700TB of data and is being used extensively for both reporting and ad-hoc analyses by more than 200 users per month.",
"title": ""
}
] |
[
{
"docid": "8086d70f97bd300002bb4ef7e60e8f9c",
"text": "In this paper, we present and investigate a model for solid tumor growth that incorporates features of the tumor microenvironment. Using analysis and nonlinear numerical simulations, we explore the effects of the interaction between the genetic characteristics of the tumor and the tumor microenvironment on the resulting tumor progression and morphology. We find that the range of morphological responses can be placed in three categories that depend primarily upon the tumor microenvironment: tissue invasion via fragmentation due to a hypoxic microenvironment; fingering, invasive growth into nutrient rich, biomechanically unresponsive tissue; and compact growth into nutrient rich, biomechanically responsive tissue. We found that the qualitative behavior of the tumor morphologies was similar across a broad range of parameters that govern the tumor genetic characteristics. Our findings demonstrate the importance of the impact of microenvironment on tumor growth and morphology and have important implications for cancer therapy. In particular, if a treatment impairs nutrient transport in the external tissue (e.g., by anti-angiogenic therapy) increased tumor fragmentation may result, and therapy-induced changes to the biomechanical properties of the tumor or the microenvironment (e.g., anti-invasion therapy) may push the tumor in or out of the invasive fingering regime.",
"title": ""
},
{
"docid": "c47f7e2128c89173d8a75271d0a488ff",
"text": "Dependence on computers to store and process sensitive information has made it necessary to secure them from intruders. A behavioral biometric such as keystroke dynamics which makes use of the typing cadence of an individual can be used to strengthen existing security techniques effectively and cheaply. Due to the ballistic (semi-autonomous) nature of the typing behavior it is difficult to impersonate, making it useful as a biometric. Therefore in this paper, we provide a basic background of the psychological basis behind the use of keystroke dynamics. We also discuss the data acquisition methods, approaches and the performance of the methods used by researchers on standard computer keyboards. In this survey, we find that the use and acceptance of this biometric could be increased by development of standardized databases, assignment of nomenclature for features, development of common data interchange formats, establishment of protocols for evaluating methods, and resolution of privacy issues.",
"title": ""
},
{
"docid": "de8f5656f17151c43e2454aa7b8f929f",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading concrete mathematics a foundation for computer science is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "d071c70b85b10a62538d73c7272f5d99",
"text": "The Amaryllidaceae alkaloids represent a large (over 300 alkaloids have been isolated) and still expanding group of biogenetically related isoquinoline alkaloids that are found exclusively in plants belonging to this family. In spite of their great variety of pharmacological and/or biological properties, only galanthamine is used therapeutically. First isolated from Galanthus species, this alkaloid is a long-acting, selective, reversible and competitive inhibitor of acetylcholinesterase, and is used for the treatment of Alzheimer’s disease. Other Amaryllidaceae alkaloids of pharmacological interest will also be described in this chapter.",
"title": ""
},
{
"docid": "2ebb21cb1c6982d2d3839e2616cac839",
"text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.",
"title": ""
},
{
"docid": "ac5319160d1444ab688a90f9ccf03c45",
"text": "In this paper we present a novel vision-based markerless hand pose estimation scheme with the input of depth image sequences. The proposed scheme exploits both temporal constraints and spatial features of the input sequence, and focuses on hand parsing and 3D fingertip localization for hand pose estimation. The hand parsing algorithm incorporates a novel spatial-temporal feature into a Bayesian inference framework to assign the correct label to each image pixel. The 3D fingertip localization algorithm adapts a recently developed geodesic extrema extraction method to fingertip detection with the hand parsing algorithm, a novel path-reweighting method and K-means clustering in metric space. The detected 3D fingertip locations are finally used for hand pose estimation with an inverse kinematics solver. Quantitative experiments on synthetic data show the proposed hand pose estimation scheme can accurately capture the natural hand motion. A simulated water-oscillator application is also built to demonstrate the effectiveness of the proposed method in human-computer interaction scenarios.",
"title": ""
},
{
"docid": "274f9e9f20a7ba3b29a5ab939aea68a2",
"text": "Clustering validation is a long standing challenge in the clustering literature. While many validation measures have been developed for evaluating the performance of clustering algorithms, these measures often provide inconsistent information about the clustering performance and the best suitable measures to use in practice remain unknown. This paper thus fills this crucial void by giving an organized study of 16 external validation measures for K-means clustering. Specifically, we first introduce the importance of measure normalization in the evaluation of the clustering performance on data with imbalanced class distributions. We also provide normalization solutions for several measures. In addition, we summarize the major properties of these external measures. These properties can serve as the guidance for the selection of validation measures in different application scenarios. Finally, we reveal the interrelationships among these external measures. By mathematical transformation, we show that some validation measures are equivalent. Also, some measures have consistent validation performances. Most importantly, we provide a guide line to select the most suitable validation measures for K-means clustering.",
"title": ""
},
{
"docid": "cf2c8ab1b22ae1a33e9235a35f942e7e",
"text": "Adversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress toward this problem by showing that non-negative weight constraints can be used to improve resistance in specific scenarios. In particular, we show that they can provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show the potential for non-negativity to be helpful to non-binary problems by applying it to image",
"title": ""
},
{
"docid": "5bac6135af1c6014352d6ce5e91ec8d3",
"text": "Acute necrotizing fasciitis (NF) in children is a dangerous illness characterized by progressive necrosis of the skin and subcutaneous tissue. The present study summarizes our recent experience with the treatment of pediatric patients with severe NF. Between 2000 and 2009, eight children suffering from NF were admitted to our department. Four of the children received an active treatment strategy including continuous renal replacement therapy (CRRT), radical debridement, and broad-spectrum antibiotics. Another four children presented at a late stage of illness, and did not complete treatment. Clinical data for these two patient groups were retrospectively analyzed. The four patients that completed CRRT, radical debridement, and a course of broad-spectrum antibiotics were cured without any significant residual morbidity. The other four infants died shortly after admission. Early diagnosis, timely debridement, and aggressive use of broad-spectrum antibiotics are key factors for achieving a satisfactory outcome for cases of acute NF. Early intervention with CRRT to prevent septic shock may also improve patient outcome.",
"title": ""
},
{
"docid": "785b1e2b8cf185c0ffa044d62309c711",
"text": "Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN’s size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1–7.3× convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at https://github.com/IntelLabs/SkimCaffe.",
"title": ""
},
{
"docid": "5ee940efb443ee38eafbba9e0d14bdd2",
"text": "BACKGROUND\nThe stability of biochemical analytes has already been investigated, but results strongly differ depending on parameters, methodologies, and sample storage times. We investigated the stability for many biochemical parameters after different storage times of both whole blood and plasma, in order to define acceptable pre- and postcentrifugation delays in hospital laboratories.\n\n\nMETHODS\nTwenty-four analytes were measured (Modular® Roche analyzer) in plasma obtained from blood collected into lithium heparin gel tubes, after 2-6 hr of storage at room temperature either before (n = 28: stability in whole blood) or after (n = 21: stability in plasma) centrifugation. Variations in concentrations were expressed as mean bias from baseline, using the analytical change limit (ACL%) or the reference change value (RCV%) as acceptance limit.\n\n\nRESULTS\nIn tubes stored before centrifugation, mean plasma concentrations significantly decreased after 3 hr for phosphorus (-6.1% [95% CI: -7.4 to -4.7%]; ACL 4.62%) and lactate dehydrogenase (LDH; -5.7% [95% CI: -7.4 to -4.1%]; ACL 5.17%), and slightly decreased after 6 hr for potassium (-2.9% [95% CI: -5.3 to -0.5%]; ACL 4.13%). In plasma stored after centrifugation, mean concentrations decreased after 6 hr for bicarbonates (-19.7% [95% CI: -22.9 to -16.5%]; ACL 15.4%), and moderately increased after 4 hr for LDH (+6.0% [95% CI: +4.3 to +7.6%]; ACL 5.17%). Based on RCV, all the analytes can be considered stable up to 6 hr, whether before or after centrifugation.\n\n\nCONCLUSION\nThis study proposes acceptable delays for most biochemical tests on lithium heparin gel tubes arriving at the laboratory or needing to be reanalyzed.",
"title": ""
},
{
"docid": "d7c8170b0926cf12ca8dfee1b87ba898",
"text": "The representation of a knowledge graph (KG) in a latent space recently has attracted more and more attention. To this end, some proposed models (e.g., TransE) embed entities and relations of a KG into a \"point\" vector space by optimizing a global loss function which ensures the scores of positive triplets are higher than negative ones. We notice that these models always regard all entities and relations in a same manner and ignore their (un)certainties. In fact, different entities and relations may contain different certainties, which makes identical certainty insufficient for modeling. Therefore, this paper switches to density-based embedding and propose KG2E for explicitly modeling the certainty of entities and relations, which learn the representations of KGs in the space of multi-dimensional Gaussian distributions. Each entity/relation is represented by a Gaussian distribution, where the mean denotes its position and the covariance (currently with diagonal covariance) can properly represent its certainty. In addition, compared with the symmetric measures used in point-based methods, we employ the KL-divergence for scoring triplets, which is a natural asymmetry function for effectively modeling multiple types of relations. We have conducted extensive experiments on link prediction and triplet classification with multiple benchmark datasets (WordNet and Freebase). Our experimental results demonstrate that our method can effectively model the (un)certainties of entities and relations in a KG, and it significantly outperforms state-of-the-art methods (including TransH and TransR).",
"title": ""
},
{
"docid": "e34815efa68cb1b7a269e436c838253d",
"text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).",
"title": ""
},
{
"docid": "1ec395dbe807ff883dab413419ceef56",
"text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.",
"title": ""
},
{
"docid": "e8bf03ec53323bb8271a42c2d4602f62",
"text": "UNLABELLED\nCommunity intervention programmes to reduce cardiovascular disease (CVD) risk factors within urban communities in developing countries are rare. One possible explanation is the difficulty of designing an intervention that corresponds to the local context and culture.\n\n\nOBJECTIVES\nTo understand people's perceptions of health and CVD, and how people prevent CVD in an urban setting in Yogyakarta, Indonesia.\n\n\nMETHODS\nA qualitative study was performed through focus group discussions and individual research interviews. Participants were selected purposively in terms of socio-economic status (SES), lay people, community leaders and government officers. Data were analysed by using content analysis.\n\n\nRESULTS\nSEVEN CATEGORIES WERE IDENTIFIED: (1) heart disease is dangerous, (2) the cause of heart disease, (3) men have no time for health, (4) women are caretakers for health, (5) different information-seeking patterns, (6) the role of community leaders and (7) patterns of lay people's action. Each category consists of sub-categories according to the SES of participants. The main theme that emerged was one of balance and harmony, indicating the necessity of assuring a balance between 'good' and 'bad' habits.\n\n\nCONCLUSIONS\nThe basic concepts of balance and harmony, which differ between low and high SES groups, must be understood when tailoring community interventions to reduce CVD risk factors.",
"title": ""
},
{
"docid": "23d2349831a364e6b77e3c263a8321c8",
"text": "lmost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed , the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing test-able usability and productivity-enhancing goals; b) a new method for identifying and focuging attention on long-term, trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users-via interviews , observations, surveys, partic-ipatory design. The aim is to understand users' cognitive, behav-ioral, attitudinal, and anthropomet-ric characteristics-and the characteristics of the jobs they will be doing. Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early~And Continual~User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior , careful evaluation of feedback , insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behav-ioral tests of functions, user interface , help system, documentation, training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usabil-ity design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usabil-ity design process leads to systems, applications, and products …",
"title": ""
},
{
"docid": "4b2d4ac1be5eeec4a7e370dfa768a5af",
"text": "A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-ofthe-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor, and two optical sensors of different characteristics. The competition included a new category dedicated to “ light” systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) on July 2004.",
"title": ""
},
{
"docid": "4419d61684dff89f4678afe3b8dc06e0",
"text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.",
"title": ""
},
{
"docid": "9e0ebe084cb9ed489c76dac9741ea08e",
"text": "THIS PAPER OFFERS ten common sense principles that will help project managers define goals, establish checkpoints, schedules, and resource requirements, motivate and empower team members, facilitate communication, and manage conflict.",
"title": ""
},
{
"docid": "5fde7006ec6f7cf4f945b234157e5791",
"text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"title": ""
}
] |
scidocsrr
|
81de69c37064d20dcaa22d985429b484
|
Author Profiling with Word+Character Neural Attention Network
|
[
{
"docid": "40da1f85f7bdc84537a608ce6bec0e17",
"text": "This paper reports on the PAN 2014 evaluation lab which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN’s tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running softwares instead of their run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, whereas the submitted softwares are kept in a running state. This year, we addressed the matter of responsibility of successful execution of submitted softwares in order to put participants back in charge of executing their software at our site. In sum, 57 softwares have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of softwares for our three tasks to date, all of which are readily available for further analysis. The report concludes with a brief summary of each task.",
"title": ""
}
] |
[
{
"docid": "d8d17aa5e709ebd4dda676eadb531ef3",
"text": "The combination of global and partial features has been an essential solution to improve discriminative performances in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty but not efficient or robust to scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information with various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments implemented on the mainstream evaluation datasets including Market-1501, DukeMTMC-reid and CUHK03 indicate that our method robustly achieves state-of-the-art performances and outperforms any existing approaches by a large margin. For example, on Market-1501 dataset in single query mode, we obtain a top result of Rank-1/mAP=96.6%/94.2% with this method after re-ranking.",
"title": ""
},
{
"docid": "e4578f9c8ebe99988528b876b162b65a",
"text": "This paper concerns the form-finding problem for general and symmetric tensegrity structures with shape constraints. A number of different geometries are treated and several fundamental properties of tensegrity structures are identified that simplify the form-finding problem. The concept of a tensegrity invariance (similarity) transformation is defined and it is shown that tensegrity equilibrium is preserved under affine node position transformations. This result provides the basis for a new tensegrity form-finding tool. The generality of the problem formulation makes it suitable for the automated generation of the equations and their derivatives. State-of-the-art numerical algorithms are applied to solve several example problems. Examples are given for tensegrity plates, shell-class symmetric tensegrity structures and structures generated by applying similarity transformation. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "49585da1d2c3102683e73dddb830ba36",
"text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. This paper posits that the knowledge pyramid is too basic and fails to represent reality and presents a revised knowledge pyramid. One key difference is that the revised knowledge pyramid includes knowledge management as an extraction of reality with a focus on organizational learning. The model also posits that newer initiatives such as business and/or customer intelligence are the result of confusion in understanding the traditional knowledge pyramid that is resolved in the revised knowledge pyramid.",
"title": ""
},
{
"docid": "982253c9f0c05e50a070a0b2e762abd7",
"text": "In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real-world such as those on movie posters or infographics. We seek to transfer both the typographic stylization (ex. serifs and ears) as well as the textual stylization (ex. color gradients and effects.) We base our experiments on our collected data set including 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.",
"title": ""
},
{
"docid": "3c876ddb6922c8ac14d619000f121136",
"text": "MANETs are an upcoming technology that is gaining momentum in recent years. Due to their unique characteristics, MANETs are suffering from wide range of security attacks. Wormhole is a common security issue encounter in MANETs routing protocol. A new routing protocol naming extended prime product number (EPPN) based on the hop count model is proposed in this article. Here hop count between source & destination is obtained depending upon the current active route. This hop count model is integrated into AODV protocol. In the proposed scheme firstly the route is selected on the basis of RREP and then hop count model calculates the hop count between source & destination. Finally wormhole DETECTION procedure will be started if the calculated hop count is greater than the received hop count in the route to get out the suspected nodes.",
"title": ""
},
{
"docid": "f92351eac81d6d28c3fd33ea96b75f91",
"text": "There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to automation of one the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle’s actuators, i.e., the steering wheel and throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, car, and truck – with encouraging results. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "128b15fdb3d5a60feacb4de78385db0c",
"text": "Delta measures are a well-established and popular family of authorship attribution methods, especially for literary texts. N-gram tracing is a novel method for authorship attribution designed for very short texts, which has its roots in forensic linguistics. We evaluate the performance of both methods in a series of experiments on English, French and German literary texts, in order to investigate the relationship between authorship attribution accuracy and text length as well as the composition of the comparison corpus. Our results show that, at least in our setting, both methods require relatively long texts and are furthermore highly sensitive to the choice of authors and texts in the comparison corpus.",
"title": ""
},
{
"docid": "9b11423260c2d3d175892f846cecced3",
"text": "Disturbances in fluid and electrolytes are among the most common clinical problems encountered in the intensive care unit (ICU). Recent studies have reported that fluid and electrolyte imbalances are associated with increased morbidity and mortality among critically ill patients. To provide optimal care, health care providers should be familiar with the principles and practice of fluid and electrolyte physiology and pathophysiology. Fluid resuscitation should be aimed at restoration of normal hemodynamics and tissue perfusion. Early goal-directed therapy has been shown to be effective in patients with severe sepsis or septic shock. On the other hand, liberal fluid administration is associated with adverse outcomes such as prolonged stay in the ICU, higher cost of care, and increased mortality. Development of hyponatremia in critically ill patients is associated with disturbances in the renal mechanism of urinary dilution. Removal of nonosmotic stimuli for vasopressin secretion, judicious use of hypertonic saline, and close monitoring of plasma and urine electrolytes are essential components of therapy. Hypernatremia is associated with cellular dehydration and central nervous system damage. Water deficit should be corrected with hypotonic fluid, and ongoing water loss should be taken into account. Cardiac manifestations should be identified and treated before initiating stepwise diagnostic evaluation of dyskalemias. Divalent ion deficiencies such as hypocalcemia, hypomagnesemia and hypophosphatemia should be identified and corrected, since they are associated with increased adverse events among critically ill patients.",
"title": ""
},
{
"docid": "9cb567317559ada8baec5b6a611e68d0",
"text": "Fungal bioactive polysaccharides deriving mainly from the Basidiomycetes family (and some from the Ascomycetes) and medicinal mushrooms have been well known and widely used in far Asia as part of traditional diet and medicine, and in the last decades have been the core of intense research for the understanding and the utilization of their medicinal properties in naturally produced pharmaceuticals. In fact, some of these biopolymers (mainly β-glucans or heteropolysaccharides) have already made their way to the market as antitumor, immunostimulating or prophylactic drugs. The fact that many of these biopolymers are produced by edible mushrooms makes them also very good candidates for the formulation of novel functional foods and nutraceuticals without any serious safety concerns, in order to make use of their immunomodulating, anticancer, antimicrobial, hypocholesterolemic, hypoglycemic and health-promoting properties. This article summarizes the most important properties and applications of bioactive fungal polysaccharides and discusses the latest developments on the utilization of these biopolymers in human nutrition.",
"title": ""
},
{
"docid": "a3d4bcdd37efd0ae1c1619a3db5356fb",
"text": "Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs). Despite its pervasiveness, the exact reasons for BatchNorm’s effectiveness are still poorly understood. The popular belief is that this effectiveness stems from controlling the change of the layers’ input distributions during training to reduce the so-called “internal covariate shift”. In this work, we demonstrate that such distributional stability of layer inputs has little to do with the success of BatchNorm. Instead, we uncover a more fundamental impact of BatchNorm on the training process: it makes the optimization landscape significantly smoother. This smoothness induces a more predictive and stable behavior of the gradients, allowing for faster training. These findings bring us closer to a true understanding of our DNN training toolkit.",
"title": ""
},
{
"docid": "abaf3d722acb6a641a481cb5324bc765",
"text": "Numerous studies have demonstrated a strong connection between the experience of stigma and the well-being of the stigmatized. But in the area of mental illness there has been controversy surrounding the magnitude and duration of the effects of labeling and stigma. One of the arguments that has been used to downplay the importance of these factors is the substantial body of evidence suggesting that labeling leads to positive effects through mental health treatment. However, as Rosenfield (1997) points out, labeling can simultaneously induce both positive consequences through treatment and negative consequences through stigma. In this study we test whether stigma has enduring effects on well-being by interviewing 84 men with dual diagnoses of mental disorder and substance abuse at two points in time--at entry into treatment, when they were addicted to drugs and had many psychiatric symptoms and then again after a year of treatment, when they were far less symptomatic and largely drug- and alcohol-free. We found a relatively strong and enduring effect of stigma on well-being. This finding indicates that stigma continues to complicate the lives of the stigmatized even as treatment improves their symptoms and functioning. It follows that if health professionals want to maximize the well-being of the people they treat, they must address stigma as a separate and important factor in its own right.",
"title": ""
},
{
"docid": "9c857daee24f793816f1cee596e80912",
"text": "Introduction Since the introduction of a new UK Ethics Committee Authority (UKECA) in 2004 and the setting up of the Central Office for Research Ethics Committees (COREC), research proposals have come under greater scrutiny than ever before. The era of self-regulation in UK research ethics has ended (Kerrison and Pollock, 2005). The UKECA recognise various committees throughout the UK that can approve proposals for research in NHS facilities (National Patient Safety Agency, 2007), and the scope of research for which approval must be sought is defined by the National Research Ethics Service, which has superceded COREC. Guidance on sample size (Central Office for Research Ethics Committees, 2007: 23) requires that 'the number should be sufficient to achieve worthwhile results, but should not be so high as to involve unnecessary recruitment and burdens for participants'. It also suggests that formal sample estimation size should be based on the primary outcome, and that if there is more than one outcome then the largest sample size should be chosen. Sample size is a function of three factors – the alpha level, beta level and magnitude of the difference (effect size) hypothesised. Referring to the expected size of effect, COREC (2007: 23) guidance states that 'it is important that the difference is not unrealistically high, as this could lead to an underestimate of the required sample size'. In this paper, issues of alpha, beta and effect size will be considered from a practical perspective. A freely-available statistical software package called GPower (Buchner et al, 1997) will be used to illustrate concepts and provide practical assistance to novitiate researchers and members of research ethics committees. There are a wide range of freely available statistical software packages, such as PS (Dupont and Plummer, 1997) and STPLAN (Brown et al, 2000). Each has features worth exploring, but GPower was chosen because of its ease of use and the wide range of study designs for which it caters. Using GPower, sample size and power can be estimated or checked by those with relatively little technical knowledge of statistics. Alpha and beta errors and power Researchers begin with a research hypothesis – a 'hunch' about the way that the world might be. For example, that treatment A is better than treatment B. There are logical reasons why this can never be demonstrated as absolutely true, but evidence that it may or may not be true can be obtained by …",
"title": ""
},
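The sample-size entry above revolves around the interplay of alpha, beta/power and effect size that GPower automates; the snippet below reproduces that kind of calculation for a two-sample t-test using statsmodels, which is my substitution for GPower rather than anything used in the paper.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group for a medium effect (Cohen's d = 0.5),
# two-sided alpha = 0.05 and 80% power (beta = 0.2).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(round(n_per_group))   # roughly 64 per group

# Achieved power if only 40 participants per group can be recruited.
power = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05)
print(round(power, 2))      # roughly 0.6
```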
{
"docid": "20746cd01ff3b67b204cd2453f1d8ecb",
"text": "Quantification of human group-behavior has so far defied an empirical, falsifiable approach. This is due to tremendous difficulties in data acquisition of social systems. Massive multiplayer online games (MMOG) provide a fascinating new way of observing hundreds of thousands of simultaneously socially interacting individuals engaged in virtual economic activities. We have compiled a data set consisting of practically all actions of all players over a period of 3 years from a MMOG played by 300,000 people. This largescale data set of a socio-economic unit contains all social and economic data from a single and coherent source. Players have to generate a virtual income through economic activities to ‘survive’ and are typically engaged in a multitude of social activities offered within the game. Our analysis of high-frequency log files focuses on three types of social networks, and tests a series of social-dynamics hypotheses. In particular we study the structure and dynamics of friend-, enemyand communication networks. We find striking differences in topological structure between positive (friend) and negative (enemy) tie networks. All networks confirm the recently observed phenomenon of network densification. We propose two approximate social laws in communication networks, the first expressing betweenness centrality as the inverse square of the overlap, the second relating communication strength to the cube of the overlap. These empirical laws provide strong quantitative evidence for the Weak ties hypothesis of Granovetter. Further, the analysis of triad significance profiles validates well-established assertions from social balance theory. We find overrepresentation (underrepresentation) of complete (incomplete) triads in networks of positive ties, and vice versa for networks of negative ties. Empirical transition probabilities between triad classes provide evidence for triadic closure with extraordinarily high precision. For the first time we provide empirical results for large-scale networks of negative social ties. Whenever possible we compare our findings with data from non-virtual human groups and provide further evidence that online game communities serve as a valid model for a wide class of human societies. With this setup we demonstrate the feasibility for establishing a ‘socio-economic laboratory’ which allows to operate at levels of precision approaching those of the natural sciences. All data used in this study is fully anonymized; the authors have the written consent to publish from the legal department of the Medical University of Vienna. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ee4a277bf113c62904f0fccd3fbf56d5",
"text": "Given the overwhelming quantity of information available from the environment, how do young learners know what to learn about and what to ignore? We found that 11-month-old infants (N = 110) used violations of prior expectations as special opportunities for learning. The infants were shown events that violated expectations about object behavior or events that were nearly identical but did not violate expectations. The sight of an object that violated expectations enhanced learning and promoted information-seeking behaviors; specifically, infants learned more effectively about objects that committed violations, explored those objects more, and engaged in hypothesis-testing behaviors that reflected the particular kind of violation seen. Thus, early in life, expectancy violations offer a wedge into the problem of what to learn.",
"title": ""
},
{
"docid": "868252357a9542cb97b3da01d6d3446f",
"text": "This article introduces and summarizes The Ever-Present Origin, the magnum opus of cultural historian and evolutionary philosopher Jean Gebser, largely in his own words. According to Gebser, human consciousness underwent a series of mutations each of which has enriched reality by a new (qualitative) dimension. At present humanity is again undergoing such a mutation: this time from the mental, perspectival structure of consciousness to the integral, aperspectival structure or, using the terminology of Sri Aurobindo, from mind to supermind. The integrality of this consciousness consists in part in its ability to integrate the preceding consciousness structures, rather than suppressing them (as the mental structure does) and hence being adversely affected by them. The article concludes with a brief account of the Mother’s personal experience of this mutation.",
"title": ""
},
{
"docid": "6daeb971164be1774869f73c7dd77d2e",
"text": "This paper offers a review of the artificial intelligence (AI) algorithms and applications presently being used for smart machine tools. These AI methods can be classified as learning algorithms (deep, meta-, unsupervised, supervised, and reinforcement learning) for diagnosis and detection of faults in mechanical components and AI technique applications in smart machine tools including intelligent manufacturing, cyber-physical systems, mechanical components prognosis, and smart sensors. A diagram of the architecture of AI schemes used for smart machine tools has been included. The respective strengths and weaknesses of the methods, as well as the challenges and future trends in AI schemes, are discussed. In the future, we will propose several AI approaches to tackle mechanical components as well as addressing different AI algorithms to deal with smart machine tools and the acquisition of accurate results.",
"title": ""
},
{
"docid": "66b7ed8c1d20bceafb0a1a4194cd91e8",
"text": "In this paper a novel watermarking scheme for image authentication and recovery is presented. The algorithm can detect modified regions in images and is able to recover a good approximation of the original content of the tampered regions. For this purpose, two different watermarks have been used: a semi-fragile watermark for image authentication and a robust watermark for image recovery, both embedded in the Discrete Wavelet Transform domain. The proposed method achieves good image quality with mean Peak Signal-to-Noise Ratio values of the watermarked images of 42 dB and identifies image tampering of up to 20% of the original image.",
"title": ""
},
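The watermarking entry above embeds a semi-fragile and a robust mark in the Discrete Wavelet Transform domain; the PyWavelets sketch below only illustrates the basic mechanics of additively marking one wavelet sub-band and reading it back, with all scheme-specific details (dual watermarks, authentication, tamper recovery) left out and the parameter names being my own.

```python
import numpy as np
import pywt

def embed_watermark(image, mark, alpha=5.0):
    """Additively embed a small binary mark into the LH sub-band of a 1-level DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'haar')
    LH = LH.copy()
    LH[:mark.shape[0], :mark.shape[1]] += alpha * (2 * mark - 1)   # map {0,1} -> {-1,+1}
    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')

def extract_watermark(original, marked, shape):
    """Non-blind extraction by differencing the LH sub-bands of original and marked images."""
    _, (LH0, _, _) = pywt.dwt2(original.astype(float), 'haar')
    _, (LH1, _, _) = pywt.dwt2(marked.astype(float), 'haar')
    diff = (LH1 - LH0)[:shape[0], :shape[1]]
    return (diff > 0).astype(int)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
mark = rng.integers(0, 2, size=(8, 8))
marked = embed_watermark(img, mark)
print(np.array_equal(extract_watermark(img, marked, mark.shape), mark))  # True
```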
{
"docid": "3e064a2a984998fe07dde451325505bb",
"text": "Whereas some educational designers believe that students should learn new concepts through explorative problem solving within dedicated environments that constrain key parameters of their search and then support their progressive appropriation of empowering disciplinary forms, others are critical of the ultimate efficacy of this discovery-based pedagogical philosophy, citing an inherent structural challenge of students constructing historically achieved conceptual structures from their ingenuous notions. This special issue presents six educational research projects that, while adhering to principles of discovery-based learning, are motivated by complementary philosophical stances and theoretical constructs. The editorial introduction frames the set of projects as collectively exemplifying the viability and breadth of discovery-based learning, even as these projects: (a) put to work a span of design heuristics, such as productive failure, surfacing implicit know-how, playing epistemic games, problem posing, or participatory simulation activities; (b) vary in their target content and skills, including building electric circuits, solving algebra problems, driving safely in traffic jams, and performing martial-arts maneuvers; and (c) employ different media, such as interactive computer-based modules for constructing models of scientific phenomena or mathematical problem situations, networked classroom collective ‘‘video games,’’ and intercorporeal master–student training practices. The authors of these papers consider the potential generativity of their design heuristics across domains and contexts.",
"title": ""
},
{
"docid": "b1b56020802d11d1f5b2badb177b06b9",
"text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.",
"title": ""
}
] |
scidocsrr
|
9088f714f71e27254a01664a10817014
|
Reassessing Melanonychia Striata in Phototypes IV, V, and VI Patients.
|
[
{
"docid": "b1c0351af515090e418d59a4b553b866",
"text": "BACKGROUND\nThe dermatoscopic examination of the nail plate has been recently introduced for the evaluation of pigmented nail lesions. There is, however, no evidence that this technique improves diagnostic accuracy of in situ melanoma.\n\n\nOBJECTIVE\nTo establish and validate patterns for intraoperative dermatoscopy of the nail matrix.\n\n\nMETHODS\nIntraoperative nail matrix dermatoscopy was performed in 100 consecutive bands of longitudinal melanonychia that were excised and submitted to histopathologic examination.\n\n\nRESULTS\nWe identified 4 dermatoscopic patterns: regular gray pattern (hypermelanosis), regular brown pattern (benign melanocytic hyperplasia), regular brown pattern with globules or blotch (melanocytic nevi), and irregular pattern (melanoma).\n\n\nLIMITATIONS\nNail matrix dermatoscopy is an invasive procedure that can not routinely be performed in all cases of melanonychia.\n\n\nCONCLUSION\nThe patterns described present high sensitivity and specificity for intraoperative differential diagnosis of pigmented nail lesions.",
"title": ""
}
] |
[
{
"docid": "b3ac28a94719a21abf6ebb719c2933cd",
"text": "0957-4174/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.eswa.2010.09.110 ⇑ Corresponding author. Tel.: +86 (0)21 2023 1668; E-mail address: liulong@tongji.edu.cn (L. Liu). Failure mode and effects analysis (FMEA) is a methodology to evaluate a system, design, process or service for possible ways in which failures (problems, errors, etc.) can occur. The two most important issues of FMEA are the acquirement of FMEA team members’ diversity opinions and the determination of risk priorities of the failure modes that have been identified. First, the FMEA team often demonstrates different opinions and knowledge from one team member to another and produces different types of assessment information because of its cross-functional and multidisciplinary nature. These different types of information are very hard to incorporate into the FMEA by the traditional model and fuzzy logic approach. Second, the traditional FMEA determines the risk priorities of failure modes using the risk priority numbers (RPNs) by multiplying the scores of the risk factors like the occurrence (O), severity (S) and detection (D) of each failure mode. The method has been criticized to have several shortcomings. In this paper, we present an FMEA using the fuzzy evidential reasoning (FER) approach and grey theory to solve the two problems and improve the effectiveness of the traditional FMEA. As is illustrated by the numerical example, the proposed FMEA can well capture FMEA team members’ diversity opinions and prioritize failure modes under different types of uncertainties. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
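The FMEA entry above contrasts the traditional RPN = O * S * D ranking with a fuzzy evidential reasoning and grey theory treatment; the sketch below shows the conventional RPN next to a simplified grey relational ranking on made-up ratings, and does not attempt the paper's FER aggregation.

```python
import numpy as np

# Rows: failure modes; columns: occurrence (O), severity (S), detection (D), each scored 1-10.
ratings = np.array([
    [4, 8, 3],
    [7, 5, 5],
    [2, 9, 6],
], dtype=float)

# Traditional FMEA: RPN = O * S * D, higher value = higher risk priority.
rpn = ratings.prod(axis=1)
print("RPN:", rpn, "risk ranking:", np.argsort(-rpn))

# Simplified grey relational analysis against the ideal (lowest-risk) series [1, 1, 1].
ideal = np.ones(3)
delta = np.abs(ratings - ideal)                       # deviation sequences
zeta = 0.5                                            # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grey_grade = coeff.mean(axis=1)                       # closer to 1 = closer to the ideal (lower risk)
# Ascending grey grade therefore lists the highest-risk failure modes first.
print("grey grade:", grey_grade.round(3), "risk ranking:", np.argsort(grey_grade))
```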
{
"docid": "64ca99b23c0f901237e7f03aa214bed5",
"text": "and high computational costs are being tackled. Researchers in academic settings as well as in startup companies such as Deep Genomics, launched July 22, 2015, by some of the authors of DeepBind, will increasingly apply deep learning to genome analysis and precision medicine. The goal is to predict the effect of genetic variants— both naturally occurring and introduced by genome editing—on a cell’s regulatory landscape and how this in turn affects disease development. Nicole Rusk ❯❯Deep learning",
"title": ""
},
{
"docid": "106086a4b63a5bfe0554f36c9feff5f5",
"text": "It seems uncontroversial that providing feedback after a test, in the form of the correct answer, enhances learning. In real-world educational situations, however, the time available for learning is often constrained-and feedback takes time. We report an experiment in which total time for learning was fixed, thereby creating a trade-off between spending time receiving feedback and spending time on other learning activities. Our results suggest that providing feedback is not universally beneficial. Indeed, under some circumstances, taking time to provide feedback can have a negative net effect on learning. We also found that learners appear to have some insight about the costs of feedback; when they were allowed to control feedback, they often skipped unnecessary feedback in favor of additional retrieval attempts, and they benefited from doing so. These results underscore the importance of considering the costs and benefits of interventions designed to enhance learning.",
"title": ""
},
{
"docid": "5eed0c6f114382d868cd841c7b5d9986",
"text": "Automatic signature verification is a well-established and an active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted from box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (Multiple rules) and by considering a single rule for all input features in the second formulation. In this work, we have found that TS model with multiple rules is better than TS model with single rule for detecting three types of forgeries; random, skilled and unskilled from a large database of sample signatures in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches using the TS model with multiple rules for improved performance.",
"title": ""
},
{
"docid": "84e77b523fa0285829ccbfa0758d8bff",
"text": "Unsupervised word representations have demonstrated improvements in predictive generalization on various NLP tasks. Most of the existing models are in fact good at capturing the relatedness among words rather than their “genuine” similarity because the context representations are often represented by a sum (or an average) of the neighbor’s embeddings, which simplifies the computation but ignores an important fact that the meaning of a word is determined by its context, reflecting not only the surrounding words but also the rules used to combine them (i.e. compositionality). On the other hand, much effort has been devoted to learning a singleprototype representation per word, which is problematic because many words are polysemous, and a single-prototype model is incapable of capturing phenomena of homonymy and polysemy. We present a neural network architecture to jointly learn word embeddings and context representations from large data sets. The explicitly produced context representations are further used to learn context-specific and multiprototype word embeddings. Our embeddings were evaluated on several NLP tasks, and the experimental results demonstrated the proposed model outperformed other competitors and is applicable to intrinsically “character-based” languages. Introduction and Motivation Much recent research has been devoted to deep learning algorithms which achieved impressive results on various natural language processing (NLP) tasks. The best results obtained on supervised learning tasks involve an unsupervised learning phase, usually in an unsupervised pre-training step to learn distributed word representations (also known as word embeddings). Such semi-supervised learning strategy has been empirically proven to be successful by using unlabeled data to supplement the supervised models for better generalization (Collobert et al. 2011; Socher et al. 2011; dos Santos and Zadrozny 2014; Zheng, Chen, and Xu 2013; Pei, Ge, and Chang 2014). Due to the importance of unsupervised pre-training in deep neural network methods, many models have been proposed to learn word embeddings. The two main model families are: (a) predicting or scoring the target word based on its local context, such as the neural probabilistic language Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. model (NNLM) (Bengio et al. 2003), C&W (Collobert et al. 2011), and continuous bag-of-words (CBOW) (Mikolov et al. 2013a); (b) using a word to predict its surrounding words, such as the Skip-gram (Mikolov et al. 2013a) as well as its extensions, multi-sense Skip-gram (MSSG) (Neelakantan et al. 2014) and proximity-ambiguity sensitive (PAS) Skipgram (Qiu et al. 2014). However, they suffer one or both of the following significant drawbacks. First, the context is often represented by a sum (or an average) of the surrounding words’ feature vectors. Those context representations may not represent word meanings well because the meaning of a word is affected by its adjacent words and the rules to combine them. Each context word in general does not contribute equally to the meaning of the target word, and thus the simple sum or average operation can not capture this compositionality. Second, most of the existing methods create a singleprototype embedding for each word. 
This single-prototype representation is problematic because many words are intrinsically polysemous, and a single-prototype model is incapable of capturing phenomena of homonymy and polysemy. It is also important to be reminded that languages differ in the degree of polysemy they exhibit. For example, Chinese with its (almost) complete lack of morphological marking for parts of speech certainly exhibits a higher degree of polysemy than English (Packard 2004). Recent models on learning multiple representations per word (Reisinger and Mooney 2010; Huang et al. 2012) generally work as follows: for each word, first cluster its contexts into a set of clusters, and then derive multiple representations (each for a cluster) of the word from the clusters of similar contexts. In these models, each context is represented by a vector composed of the occurrences of its neighboring words or a weighted average of the surrounding words’ vector. Such context definitions neglect the relative order of words in the context window, which impairs the quality of the multi-prototype representations derived by the clustering based on such context representations. In fact, the order of words does matter to the meaning of those they form. Besides, those models set the equal number of prototypes for each word, although different numbers were tested to determine the optimal number of prototypes. We argue that the degree of polysemy in polysemous words depends on the number of distinct contexts in which they occur, especially for the “character-based” languages (e.g. Chinese and Japanese). When a word appears in more different linguistic contexts, it may carry more meanings, and greater number of prototypes should be created for that word. It is unnecessary for rare words to learn several prototypes, whereas for most common words, if the number of word senses is greater than that of prototypes, certain prototype could be affected by the contexts associated with different meanings, and the learned prototype might not represent any one of the meanings well as it is influenced by several (and different) meanings of that word. We present a novel neural network architecture to learn multi-prototype word/character embeddings from large data sets, in which the context representations are produced by a convolutional layer, designed to capture the compositionality of context words or characters. This architecture allows us to take advantage of the better trained context representations and to learn multi-prototype word/character embeddings accordingly. The number of prototypes for each word/character can be different and selected automatically. Experimental results show that the embeddings learned by our model outperform competitors on the several NLP tasks across different languages by transferring the unsupervised representations into the supervised models. Context-Specific Vector Model Many methods were proposed to learn distributed word representations from large unlabeled texts, but the questions still remain as to how meaning is produced by such unsupervised methods, and how the resulting word vectors represent that meaning. We describe here a neural networkbased model, named CSV (Context-Specific Vector), which can generate the context representation of a word/character, and learn the word/character vector carrying the meaning inferred by that context. 
The proposed network architecture contains a convolutional layer that is designed to produce the refined context representations reflecting the order of their constituents and the rules to combine them. The better generated context representations are used to learn contextspecific multi-prototype word/character embeddings by the sense induction with a simple “winner-takes-all” strategy. The Neural Network Architecture The network architecture is shown in Figure 1. From now on, a term “type” is used to refer to word or character, depending on which language is considered (e.g. word in English or character in Chinese). Each type is associated with a global vector and multiple sense vectors. The input to the network is a type’s context window, and the convolutional layer produces the representation (or vector) for that context. One of the type’s sense vectors is chosen by how well the sense fits into the context, and the network is trained to differentiate the selected sense vector from others. We use a window approach that assumes the meaning of a type depends mainly on its surrounding types. The types (except the target) in the window of size w (a hyper parameter) are fed into the network as indices that are used by a lookup operation to transform types into their global vectors. We consider a fixed-sized type dictionary D. The type’s global vectors are stored in a matrixM ∈ Rd×|D|, where d is the dimensionality of the vector space (a hyper-parameter to be chosen) and |D| is the size of the dictionary. Convolution • • • g(t−w/2) • • • Word/Character Context",
"title": ""
},
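The embedding entry above selects one of several sense vectors by how well it fits a convolutionally produced context representation; the toy NumPy sketch below shows only that winner-takes-all selection step, with hypothetical dimensions, random weights and no training, so it is a simplification of the described architecture rather than a reimplementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, window, n_senses = 8, 4, 3

# Global vectors of the context types (target excluded), shape (window, d).
context = rng.normal(size=(window, d))

# A position-wise "convolution": one filter per window slot, then a nonlinearity.
filters = rng.normal(size=(window, d, d)) * 0.1
context_vec = np.tanh(sum(filters[t] @ context[t] for t in range(window)))

# Sense vectors of the target type; the best-fitting sense wins ("winner-takes-all").
senses = rng.normal(size=(n_senses, d))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine(s, context_vec) for s in senses])
winner = int(scores.argmax())
print("selected sense:", winner, "scores:", scores.round(3))
```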
{
"docid": "e96c9bdd3f5e9710f7264cbbe02738a7",
"text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.",
"title": ""
},
{
"docid": "a67abb94bee6116bdc81cb2a7f120e10",
"text": "Why should wait for some days to get or receive the cybersecurity systems for human cognition augmentation book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This cybersecurity systems for human cognition augmentation is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "6acf185db311307896beafe7bcb4a366",
"text": "Spirituality has mostly been studied in psychology as implied in the process of overcoming adversity, being triggered by negative experiences, and providing positive outcomes. By reversing this pathway, we investigated whether spirituality may also be triggered by self-transcendent positive emotions, which are elicited by stimuli appraised as demonstrating higher good and beauty. In two studies, elevation and/or admiration were induced using different methods. These emotions were compared to two control groups, a neutral state and a positive emotion (mirth). Self-transcendent positive emotions increased participants' spirituality (Studies 1 and 2), especially for the non-religious participants (Study 1). Two basic world assumptions, i.e., belief in life as meaningful (Study 1) and in the benevolence of others and the world (Study 2) mediated the effect of these emotions on spirituality. Spirituality should be understood not only as a coping strategy, but also as an upward spiralling pathway to and from self-transcendent positive emotions.",
"title": ""
},
{
"docid": "f1ef048caa1ca9f3f35d4de79e0eb3b1",
"text": "Artists and other creators naturally draw inspiration for new works based on previous artifacts in their own fields. Some of the most profound examples of creativity, however, also transform the field by redefining and combining rules from other domains. In procedural content generation for games, representations and constraints are typically modeled on the target domain. In contrast, this paper examines representing and generating video game levels through a representation called functional scaffolding for musical composition originally designed to procedurally compose music. Viewing music as a means to re-frame the way we think about, represent and computationally design video game levels, this paper presents a method for deconstructing game levels into multiple “instruments” or “voices,” wherein each voice represents a tile type. We then use functional scaffolding to automatically generate “accompaniment” to individual voices. Complete new levels are subsequently synthesized from generated voices. Our proof-of-concept experiments showcase that music is a rich metaphor for representing naturalistic, yet unconventional and playable, levels in the classic platform game Super Mario Bros, demonstrating the capacity of our approach for potential applications in computational creativity and game design.",
"title": ""
},
{
"docid": "eced014d1a6b3b20ab41172be3de3518",
"text": "Driving intention recognition and trajectory prediction of moving vehicles are two important requirements of future advanced driver assistance systems (ADAS) for urban intersections. In this paper, we present a consistent framework for solving these two problems. The key idea is to model the spatio-temporal dependencies of traffic situations with a two-dimensional Gaussian process regression. With this representation the driving intention can be recognized by evaluating the data likelihood for each individual regression model. For the trajectory prediction purpose, we transform these regression models into the corresponding dynamical models and combine them with Unscented Kalman Filters (UKF) to overcome the non-linear issue. We evaluate our framework with data collected from real traffic scenarios and show that our approach can be used for recognition of different driving intentions and for long-term trajectory prediction of traffic situations occurring at urban intersections.",
"title": ""
},
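The trajectory entry above relies on Gaussian process regression, with per-intention models compared by data likelihood; the scikit-learn sketch below is a minimal stand-in that fits a GP to a synthetic 2-D trajectory, reports the log marginal likelihood one would compare across intention models, and extrapolates the path with uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Observed trajectory: positions (x, y) sampled at times t (a right-turn-like arc plus noise).
t_obs = np.linspace(0.0, 4.0, 20)[:, None]
xy_obs = np.column_stack([np.cos(0.4 * t_obs.ravel()),
                          np.sin(0.4 * t_obs.ravel())]) * 10.0
xy_obs += np.random.default_rng(0).normal(scale=0.05, size=xy_obs.shape)

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_obs, xy_obs)

# Log marginal likelihood: the quantity one would compare across per-intention models.
print("log-likelihood:", gp.log_marginal_likelihood_value_)

# Predict half a second ahead with uncertainty.
t_future = np.array([[4.5]])
mean, std = gp.predict(t_future, return_std=True)
print("predicted (x, y):", mean.round(2), "+/-", std.round(2))
```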
{
"docid": "bb3e1657ce46c4da90f4ac1ef07aa918",
"text": "Credit is a widely used tool to finance personal and corporate projects. The risk of default has motivated lenders to use a credit scoring system, which helps them make more efficient decisions about whom to extend credit. Credit scores serve as a financial user model, and have been traditionally computed from the user’s past financial history. As a result, people without any prior financial history might be excluded from the credit system. In this paper we present MobiScore, an approach to build a model of the user’s financial risk from mobile phone usage data, which previous work has shown to convey information about e.g. personality and socioeconomic status. MobiScore could replace traditional credit scores when no financial history is available, providing credit access to currently excluded population sectors, or be used as a complementary source of information to improve traditional finance-based scores. We validate the proposed approach using real data from a telecommunications operator and a financial institution in a Latin American country, resulting in an accurate model of default comparable to traditional credit scoring techniques.",
"title": ""
},
{
"docid": "da3650998a4bd6ea31467daa631d0e05",
"text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "54447ac4dfe0d48dc82195f5527cd1a8",
"text": "OpenGM is a C++ template library for defining discrete graphical models and performing inference on these models, using a wide range of state-of-the-art algorithms. No restrictions are imposed on the factor graph to allow for higher-order factors and arbitrary neighborhood structures. Large models with repetitive structure are handled efficiently because (i) functions that occur repeatedly need to be stored only once, and (ii) distinct functions can be implemented differently, using different encodings alongside each other in the same model. Several parametric functions (e.g. metrics), sparse and dense value tables are provided and so is an interface for custom C++ code. Algorithms are separated by design from the representation of graphical models and are easily exchangeable. OpenGM, its algorithms, HDF5 file format and command line tools are modular and extendible.",
"title": ""
},
{
"docid": "fe9dd4b6af83b327b8af7734567dcea6",
"text": "When content consumers explicitly judge content positively, we consider them to be engaged. Unfortunately, explicit user evaluations are difficult to collect, as they require user effort. Therefore, we propose to use device interactions as implicit feedback to detect engagement.\n We assess the usefulness of swipe interactions on tablets for predicting engagement and make the comparison with using traditional features based on time spent.\n We gathered two unique datasets of more than 250,000 swipes, 100,000 unique article visits, and over 35,000 explicitly judged news articles by modifying two commonly used tablet apps of two newspapers. We tracked all device interactions of 407 experiment participants during one month of habitual news reading.\n We employed a behavioral metric as a proxy for engagement, because our analysis needed to be scalable to many users, and scanning behavior required us to allow users to indicate engagement quickly.\n We point out the importance of taking into account content ordering, report the most predictive features, zoom in on briefly read content and on the most frequently read articles.\n Our findings demonstrate that fine-grained tablet interactions are useful indicators of engagement for newsreaders on tablets. The best features successfully combine both time-based aspects and swipe interactions.",
"title": ""
},
{
"docid": "32e2c444bfbe7c85ea600c2b91bf2370",
"text": "The consumption of caffeine (an adenosine receptor antagonist) correlates inversely with depression and memory deterioration, and adenosine A2A receptor (A2AR) antagonists emerge as candidate therapeutic targets because they control aberrant synaptic plasticity and afford neuroprotection. Therefore we tested the ability of A2AR to control the behavioral, electrophysiological, and neurochemical modifications caused by chronic unpredictable stress (CUS), which alters hippocampal circuits, dampens mood and memory performance, and enhances susceptibility to depression. CUS for 3 wk in adult mice induced anxiogenic and helpless-like behavior and decreased memory performance. These behavioral changes were accompanied by synaptic alterations, typified by a decrease in synaptic plasticity and a reduced density of synaptic proteins (synaptosomal-associated protein 25, syntaxin, and vesicular glutamate transporter type 1), together with an increased density of A2AR in glutamatergic terminals in the hippocampus. Except for anxiety, for which results were mixed, CUS-induced behavioral and synaptic alterations were prevented by (i) caffeine (1 g/L in the drinking water, starting 3 wk before and continued throughout CUS); (ii) the selective A2AR antagonist KW6002 (3 mg/kg, p.o.); (iii) global A2AR deletion; and (iv) selective A2AR deletion in forebrain neurons. Notably, A2AR blockade was not only prophylactic but also therapeutically efficacious, because a 3-wk treatment with the A2AR antagonist SCH58261 (0.1 mg/kg, i.p.) reversed the mood and synaptic dysfunction caused by CUS. These results herald a key role for synaptic A2AR in the control of chronic stress-induced modifications and suggest A2AR as candidate targets to alleviate the consequences of chronic stress on brain function.",
"title": ""
},
{
"docid": "006515574bf1f690818465200d43c4ba",
"text": "Although the concept of school engagement figures prominently in most school dropout theories, there has been little empirical research conducted on its nature and course and, more importantly, the association with dropout. Information on the natural development of school engagement would greatly benefit those interested in preventing student alienation during adolescence. Using a longitudinal sample of 11,827 French-Canadian high school students, we tested behavioral, affective, cognitive indices of engagement both separately and as a global construct. We then assessed their contribution as prospective predictors of school dropout using factor analysis and structural equation modeling. Global engagement reliably predicted school dropout. Among its three specific dimensions, only behavioral engagement made a significant contribution in the prediction equation. Our findings confirm the robustness of the overall multidimensional construct of school engagement, which reflects both cognitive and psychosocial characteristics, and underscore the importance attributed to basic participation and compliance issues in reliably estimating risk of not completing basic schooling during adolescence.",
"title": ""
},
{
"docid": "6b04721c0fc7135ddd0fdf76a9cfdd79",
"text": "Functional magnetic resonance imaging (fMRI) was used to compare brain activity during the retrieval of coarse- and fine-grained spatial details and episodic details associated with a familiar environment. Long-time Toronto residents compared pairs of landmarks based on their absolute geographic locations (requiring either coarse or fine discriminations) or based on previous visits to those landmarks (requiring episodic details). An ROI analysis of the hippocampus showed that all three conditions activated the hippocampus bilaterally. Fine-grained spatial judgments recruited an additional region of the right posterior hippocampus, while episodic judgments recruited an additional region of the right anterior hippocampus, and a more extensive region along the length of the left hippocampus. To examine whole-brain patterns of activity, Partial Least Squares (PLS) analysis was used to identify sets of brain regions whose activity covaried with the three conditions. All three comparison judgments recruited the default mode network including the posterior cingulate/retrosplenial cortex, middle frontal gyrus, hippocampus, and precuneus. Fine-grained spatial judgments also recruited additional regions of the precuneus, parahippocampal cortex and the supramarginal gyrus. Episodic judgments recruited the posterior cingulate and medial frontal lobes as well as the angular gyrus. These results are discussed in terms of their implications for theories of hippocampal function and spatial and episodic memory.",
"title": ""
},
{
"docid": "68c7509ec0261b1ddccef7e3ad855629",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "d34d8dd7ba59741bb5e28bba3e870ac4",
"text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.",
"title": ""
},
{
"docid": "0678581b45854e8903c0812a25fd9ad1",
"text": "In this study we explored the relationship between narcissism and the individual's use of personal pronouns during extemporaneous monologues. The subjects, 24 males and 24 females, were asked to talk for approximately 5 minutes on any topic they chose. Following the monologues the subjects were administered the Narcissistic Personality Inventory, the Eysenck Personality Questionnaire, and the Rotter Internal-External Locus of Control Scale. The monologues were tape-recorded and later transcribed and analyzed for the subjects' use of personal pronouns. As hypothesized, individuals who scored higher on narcissism tended to use more first person singular pronouns and fewer first person plural pronouns. Discriminant validity for the relationship between narcissism and first person pronoun usage was exhibited in that narcissism did not show a relationship with subjects' use of second and third person pronouns, nor did the personality variables of extraversion, neuroticism, or locus of control exhibit any relationship with the subjects' personal pronoun usage.",
"title": ""
}
] |
scidocsrr
|
93eb45ca07761256c9530630d1065c47
|
IsoRankN: spectral methods for global alignment of multiple protein networks
|
[
{
"docid": "ce501e6b012aa9356b59842d50ecf9b6",
"text": "We describe an algorithm, IsoRank, for global alignment of two protein-protein interaction (PPI) networks. IsoRank aims to maximize the overall match between the two networks; in contrast, much of previous work has focused on the local alignment problem— identifying many possible alignments, each corresponding to a local region of similarity. IsoRank is guided by the intuition that a protein should be matched with a protein in the other network if and only if the neighbors of the two proteins can also be well matched. We encode this intuition as an eigenvalue problem, in a manner analogous to Google’s PageRank method. We use IsoRank to compute the first known global alignment between the S. cerevisiae and D. melanogaster PPI networks. The common subgraph has 1420 edges and describes conserved functional components between the two species. Comparisons of our results with those of a well-known algorithm for local network alignment indicate that the globally optimized alignment resolves ambiguity introduced by multiple local alignments. Finally, we interpret the results of global alignment to identify functional orthologs between yeast and fly; our functional ortholog prediction method is much simpler than a recently proposed approach and yet provides results that are more comprehensive.",
"title": ""
}
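The IsoRank passage above encodes the neighborhood-matching intuition as an eigenvalue problem solved in the style of PageRank; the NumPy sketch below runs that power iteration on two tiny graphs and reads off a greedy node matching (a bare-bones illustration with a uniform prior E, not the released IsoRank/IsoRankN software).

```python
import numpy as np

def isorank(A, B, E=None, alpha=0.8, iters=200, tol=1e-9):
    """Bare-bones IsoRank-style power iteration for two undirected graphs.

    A, B  : adjacency matrices of the two networks
    E     : prior node-similarity matrix (e.g. sequence similarity); uniform if None
    alpha : weight of the network-topology term versus the prior
    """
    n1, n2 = A.shape[0], B.shape[0]
    E = np.full((n1, n2), 1.0) if E is None else E.astype(float)
    E = E / E.sum()
    A_hat = A / A.sum(axis=0, keepdims=True)      # column u scaled by 1/deg(u)
    B_hat = B / B.sum(axis=1, keepdims=True)      # row v scaled by 1/deg(v)
    R = E.copy()
    for _ in range(iters):
        # R[i, j] <- alpha * sum over neighbors u of i, v of j of R[u, v]/(deg(u)deg(v)) + (1-alpha) E[i, j]
        R_new = alpha * (A_hat @ R @ B_hat) + (1.0 - alpha) * E
        R_new /= R_new.sum()                      # keep R a probability-like table
        if np.abs(R_new - R).max() < tol:
            break
        R = R_new
    return R

def greedy_match(R):
    """Read a one-to-one alignment off R by repeatedly taking the largest entry."""
    R = R.copy()
    pairs = []
    for _ in range(min(R.shape)):
        i, j = np.unravel_index(np.argmax(R), R.shape)
        pairs.append((int(i), int(j)))
        R[i, :] = -np.inf
        R[:, j] = -np.inf
    return pairs

# Tiny demo: a 4-node path graph aligned with a relabeled copy of itself.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
perm = [2, 0, 3, 1]
B = A[np.ix_(perm, perm)]
R = isorank(A, B)
print(np.round(R, 3))
print("greedy node matching:", greedy_match(R))
```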
] |
[
{
"docid": "1719ad98795f32a55f4e920e075ee798",
"text": "BACKGROUND\nUrinary tract infections (UTIs) are one of main health problems caused by many microorganisms, including uropathogenic Escherichia coli (UPEC). UPEC strains are the most frequent pathogens responsible for 85% and 50% of community and hospital acquired UTIs, respectively. UPEC strains have special virulence factors, including type 1 fimbriae, which can result in worsening of UTIs.\n\n\nOBJECTIVES\nThis study was performed to detect type 1 fimbriae (the FimH gene) among UPEC strains by molecular method.\n\n\nMATERIALS AND METHODS\nA total of 140 isolated E. coli strains from patients with UTI were identified using biochemical tests and then evaluated for the FimH gene by polymerase chain reaction (PCR) analysis.\n\n\nRESULTS\nThe UPEC isolates were identified using biochemical tests and were screened by PCR. The fimH gene was amplified using specific primers and showed a band about 164 bp. The FimH gene was found in 130 isolates (92.8%) of the UPEC strains. Of 130 isolates positive for the FimH gene, 62 (47.7%) and 68 (52.3%) belonged to hospitalized patients and outpatients, respectively.\n\n\nCONCLUSIONS\nThe results of this study indicated that more than 90% of E. coli isolates harbored the FimH gene. The high binding ability of FimH could result in the increased pathogenicity of E. coli; thus, FimH could be used as a possible diagnostic marker and/or vaccine candidate.",
"title": ""
},
{
"docid": "c035b514ee694df3179363296ff48e75",
"text": "A new microcrack-based continuous damage model is developed to describe the behavior of brittle geomaterials under compression dominated stress ®elds. The induced damage is represented by a second rank tensor, which re ̄ects density and orientation of microcracks. The damage evolution law is related to the propagation condition of microcracks. Based on micromechanical analyses of sliding wing cracks, the actual microcrack distributions are replaced by an equivalent set of cracks subjected to a macroscopic local tensile stress. The principles of the linear fracture mechanics are used to develop a suitable macroscopic propagation criterion. The onset of microcrack coalescence leading to localization phenomenon and softening behavior is included by using a critical crack length. The constitutive equations are developed by considering that microcrack growth induces an added material ̄exibility. The eective elastic compliance of damaged material is obtained from the de®nition of a particular Gibbs free energy function. Irreversible damage-related strains due to residual opening of microcracks after unloading are also taken into account. The resulting constitutive equations can be arranged to reveal the physical meaning of each model parameter and to determine its value from standard laboratory tests. An explicit expression for the macroscopic eective constitutive tensor (compliance or stiness) makes it possible, in principal, to determine the critical damage intensity at which the localization condition is satis®ed. The proposed model is applied to two typical brittle rocks (a French granite and Tennessee marble). Comparison between test data and numerical simulations show that the proposed model is able to describe main features of mechanical behaviors observed in brittle geomaterials under compressive stresses. Ó 2000 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "01d34357d5b8dbf4b89d3f8683f6fc58",
"text": "Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high dimensional spaces. The autonomous decomposition of tasks and use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks, by leveraging association rule mining, which discovers hidden relationship among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract Hierarchical Structure of Tasks in Reinforcement Learning). It extracts temporal and structural relationships of sub-goals in RL, and multi-task RL. In particular,it finds sub-goals and relationship among them. It is shown the significant efficiency and performance of the proposed method in two main topics of RL.",
"title": ""
},
{
"docid": "8a8edb63c041a01cbb887cd526b97eb0",
"text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.",
"title": ""
},
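The BrainNetCNN entry above hinges on edge-to-edge and edge-to-node filters defined on a connectivity matrix; the NumPy sketch below is my own reading of those operations (a cross-shaped row-plus-column filter, then a row collapse) and is not the authors' implementation.

```python
import numpy as np

def edge_to_edge(A, row_w, col_w):
    """Cross-shaped filter: the response at (i, j) mixes the whole i-th row and j-th column."""
    row_part = (A * row_w[None, :]).sum(axis=1)      # shape (n,): weighted row sums
    col_part = (col_w[None, :] @ A).ravel()          # shape (n,): weighted column sums
    return row_part[:, None] + col_part[None, :]     # (n, n) filtered connectivity map

def edge_to_node(A, w):
    """Collapse each node's incident edge weights into a single node feature."""
    return (A * w[None, :]).sum(axis=1)              # shape (n,)

rng = np.random.default_rng(0)
n = 6
A = rng.random((n, n))
A = (A + A.T) / 2                                    # a symmetric toy connectivity matrix
np.fill_diagonal(A, 0.0)

e2e = edge_to_edge(A, rng.normal(size=n), rng.normal(size=n))
e2n = edge_to_node(e2e, rng.normal(size=n))
print(e2e.shape, e2n.shape)                          # (6, 6) (6,)
```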
{
"docid": "ce6e5532c49b02988588f2ac39724558",
"text": "hlany modern computing environments involve dynamic peer groups. Distributed Simdation, mtiti-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement. in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticateed key agreement and extends the restits to Group Dfi*Hehart key agreement. In the process, some new security properties (unique to groups) are discussed.",
"title": ""
},
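The key-agreement entry above builds group key agreement on top of two-party Diffie-Hellman; the toy sketch below (deliberately small, insecure parameters) shows the two-party exchange and the naive chained exponentiation that gives three parties a common g^(abc), which real group Diffie-Hellman protocols organize far more carefully.

```python
import secrets

# Toy parameters only: 2**127 - 1 is prime, but far too small (and unsuited) for real use.
p = 2**127 - 1
g = 5

# Two-party Diffie-Hellman key agreement.
a = secrets.randbelow(p - 2) + 1          # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1          # Bob's secret exponent
A = pow(g, a, p)                          # Alice -> Bob
B = pow(g, b, p)                          # Bob -> Alice
assert pow(B, a, p) == pow(A, b, p)       # both ends now share g^(a*b) mod p

# Naive three-party extension in the same spirit: everyone ends up with g^(a*b*c).
c = secrets.randbelow(p - 2) + 1          # Carol's secret exponent
k_alice = pow(pow(pow(g, b, p), c, p), a, p)
k_bob   = pow(pow(pow(g, c, p), a, p), b, p)
k_carol = pow(pow(pow(g, a, p), b, p), c, p)
assert k_alice == k_bob == k_carol
print("all parties derived the same key")
```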
{
"docid": "6f0d9f383c0142b43ea440e6efb2a59a",
"text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.",
"title": ""
},
{
"docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f",
"text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.",
"title": ""
},
{
"docid": "4d3ca12b25de97da5ec6f9b0989d7109",
"text": "In a context where personal mobility accounts for about two thirds of the total transportation energy use, assessing an individual’s personal contribution to the emissions of a city becomes highly valuable. Prior efforts in this direction have resulted in web-based CO2 emissions calculators, smartphonebased applications, and wearable sensors that detect a user’s transportation modes. Yet, high energy consumption and had-hoc sensors have limited the potential adoption of these methodologies. In this technical report we outline an approach that could make it possible to assess the individual carbon footprint of an unlimited number of people. Our application can be run on standard smartphones for long periods of time and can operate transparently. Given that we make use of an existing platform (smartphones) that is widely adopted, our method has the potential of unprecedented data collection of mobility patterns. Our method estimates in real-time the CO2 emissions using inertial information gathered from mobile phone sensors. In particular, an algorithm automatically classifies the user’s transportation mode into eight classes using a decision tree. The algorithm is trained on features computed from the Fast Fourier Transform (FFT) coefficients of the total acceleration measured by the mobile phone accelerometer. A working smartphone application for the Android platform has been developed and experimental data have been used to train and validate the proposed method.",
"title": ""
},
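The carbon-footprint entry above classifies transportation mode with a decision tree trained on FFT coefficients of the total acceleration; the sketch below mirrors that pipeline on synthetic accelerometer windows with hypothetical class labels, so the features, frequencies and classes are illustrative rather than the authors'.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, n_samples = 50, 256                      # 50 Hz accelerometer windows of ~5 s

def synthetic_window(dominant_hz):
    """Total-acceleration magnitude with one dominant oscillation plus noise."""
    t = np.arange(n_samples) / fs
    return 1.0 + np.sin(2 * np.pi * dominant_hz * t) + 0.3 * rng.normal(size=n_samples)

def fft_features(window, n_coeffs=16):
    """Magnitudes of the first FFT coefficients (DC excluded) of the window."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    return spectrum[1:n_coeffs + 1]

# Hypothetical classes: 0 = still, 1 = walking (~2 Hz), 2 = cycling (~1 Hz).
freqs = {0: 0.1, 1: 2.0, 2: 1.0}
labels = rng.integers(0, 3, size=300)
X = np.array([fft_features(synthetic_window(freqs[int(c)])) for c in labels])
y = labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```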
{
"docid": "0db28b5ec56259c8f92f6cc04d4c2601",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "289799853b2de7ae3a4e8fdb5b09c88f",
"text": "Zero-confirmation transactions, i.e. transactions that have been broadcast but are still pending to be included in the blockchain, have gained attention in order to enable fast payments in Bitcoin, shortening the time for performing payments. Fast payments are desirable in certain scenarios, for instance, when buying in vending machines, fast food restaurants, or withdrawing from an ATM. Despite being quickly propagated through the network, zero-confirmation transactions are not protected against double-spending attacks, since the double-spending protection Bitcoin offers relies on the blockchain and, by definition, such transactions are not yet included in it. In this paper, we propose a double-spending prevention mechanism for Bitcoin zero-confirmation transactions. Our proposal is based on exploiting the flexibility of the Bitcoin scripting language together with a well-known vulnerability of the ECDSA signature scheme to discourage attackers from performing such an attack.",
"title": ""
},
{
"docid": "0f56b99bc1d2c9452786c05242c89150",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "7dbac30cfffaa90addff5ffa6c8bb056",
"text": "This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of events' spatiotemporal space. We will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show the method adequacy with high data sparseness and temporal resolution of event-based acquisition that allows the computation of motion flow with microsecond accuracy and at very low computational cost.",
"title": ""
},
{
"docid": "fd317c492ed68bf14bdef38c27ed6696",
"text": "The systematic study of subcellular location patterns is required to fully characterize the human proteome, as subcellular location provides critical context necessary for understanding a protein's function. The analysis of tens of thousands of expressed proteins for the many cell types and cellular conditions under which they may be found creates a need for automated subcellular pattern analysis. We therefore describe the application of automated methods, previously developed and validated by our laboratory on fluorescence micrographs of cultured cell lines, to analyze subcellular patterns in tissue images from the Human Protein Atlas. The Atlas currently contains images of over 3000 protein patterns in various human tissues obtained using immunohistochemistry. We chose a 16 protein subset from the Atlas that reflects the major classes of subcellular location. We then separated DNA and protein staining in the images, extracted various features from each image, and trained a support vector machine classifier to recognize the protein patterns. Our results show that our system can distinguish the patterns with 83% accuracy in 45 different tissues, and when only the most confident classifications are considered, this rises to 97%. These results are encouraging given that the tissues contain many different cell types organized in different manners, and that the Atlas images are of moderate resolution. The approach described is an important starting point for automatically assigning subcellular locations on a proteome-wide basis for collections of tissue images such as the Atlas.",
"title": ""
},
{
"docid": "da9e4fd8627a8a2aae41374094483083",
"text": "With the popularity of cloud computing, mobile devices can store/retrieve personal data from anywhere at any time. Consequently, the data security problem in mobile cloud becomes more and more severe and prevents further development of mobile cloud. There are substantial studies that have been conducted to improve the cloud security. However, most of them are not applicable for mobile cloud since mobile devices only have limited computing resources and power. Solutions with low computational overhead are in great need for mobile cloud applications. In this paper, we propose a lightweight data sharing scheme (LDSS) for mobile cloud computing. It adopts CP-ABE, an access control technology used in normal cloud environment, but changes the structure of access control tree to make it suitable for mobile cloud environments. LDSS moves a large portion of the computational intensive access control tree transformation in CP-ABE from mobile devices to external proxy servers. Furthermore, to reduce the user revocation cost, it introduces attribute description fields to implement lazy-revocation, which is a thorny issue in program based CP-ABE systems. The experimental results show that LDSS can effectively reduce the overhead on the mobile device side when users are sharing data in mobile cloud environments.",
"title": ""
},
{
"docid": "a0e9e04a3b04c1974951821d44499fa7",
"text": "PURPOSE\nTo examine factors related to turnover of new graduate nurses in their first job.\n\n\nDESIGN\nData were obtained from a 3-year panel survey (2006-2008) of the Graduates Occupational Mobility Survey that followed-up college graduates in South Korea. The sample consisted of 351 new graduates whose first job was as a full-time registered nurse in a hospital.\n\n\nMETHODS\nSurvival analysis was conducted to estimate survival curves and related factors, including individual and family, nursing education, hospital, and job dissatisfaction (overall and 10 specific job aspects).\n\n\nFINDINGS\nThe estimated probabilities of staying in their first job for 1, 2, and 3 years were 0.823, 0.666, and 0.537, respectively. Nurses reporting overall job dissatisfaction had significantly lower survival probabilities than those who reported themselves to be either neutral or satisfied. Nurses were more likely to leave if they were married or worked in small (vs. large), nonmetropolitan, and nonunionized hospitals. Dissatisfaction with interpersonal relationships, work content, and physical work environment was associated with a significant increase in the hazards of leaving the first job.\n\n\nCONCLUSIONS\nHospital characteristics as well as job satisfaction were significantly associated with new graduates' turnover.\n\n\nCLINICAL RELEVANCE\nThe high turnover of new graduates could be reduced by improving their job satisfaction, especially with interpersonal relationships, work content, and the physical work environment.",
"title": ""
},
{
"docid": "f4cdcee11f9baad11ddaa96857908b0a",
"text": "Neural hardware has undergone rapid development during the last few years. This paper presents an overview of neural hardware projects within industries and academia. It describes digital, analog, and hybrid neurochips and accelerator boards as well as large-scale neurocomputers built from general purpose processors and communication elements. Special attention is given to multiprocessor projects that focus on scalability, flexibility, and adaptivity of the design and thus seem suitable for brain-style (cognitive) processing. The sources used for this overview are taken from journal papers, conference proceedings, data sheets, and ftp-sites and present an up-to-date overview of current state-of-the-art neural hardware implementations. 1 Categorization of neural hardware This paper presents an overview of time-multiplexed hardware designs, some of which are already commercially available, others representing design studies being carried out by research groups. A large number of design studies are being carried out in the US, Japan, and Europe. In many cases these studies concern design concepts of neurocomputers that will never be built in full. Neurocomputer building is expensive in terms of development time and resources, and little is known about the real commercial prospects for working implementations. Moreover, there is no clear consensus on how to exploit the currently available VLSI and even ultra large-scale integration (ULSI) technological capabilities for massively parallel neural network hardware implementations. Another reason for not actually building neurocomputers might lie in the fact that the number and variety of (novel) neural network paradigms is still increasing rapidly. For many paradigms the capabilities are hardly known yet. Paradoxically, these capabilities can only be tested in full when dedicated hardware is available. Commercially available products mostly consist of dedicated implementations of well known and successful paradigms like multi-layer perceptrons with backpropagation learning (e.g., Rumelhart & McClelland, 1986), the Hopfield (e.g., Hopfield, 1982), or the Kohonen models (e.g., Kohonen, 1989). These dedicated implementations in general do not offer much flexibility for the simulation of alternative paradigms. More interesting implementations, from a scientific as opposed to an application viewpoint, can mainly be found only in research laboratories. Dedicated neural hardware forms the sixth computer generation. The first four computer generations are mainly distinguished by the implementation technology used. They were built up respectively from 1 This is a draft version based on Chapter 3 in: Heemskerk, J.N.H. (1995). Neurocomputers for BrainStyle Processing. Design, Implementation and Application . PhD thesis, Unit of Experimental and Theoretical Psychology Leiden University, The Netherlands. This draft is ftp-able from: ftp.mrc-apu.cam.ac.uk/pub/nn. 2 Jan N.H. Heemskerk vacuum tubes, transistors, integrated circuitry, and VLSI. Third and fourth generation machines were the first parallel machines. Fifth generation computer systems were defined as knowledge based systems, originally designed to accept sensory inputs (Carling, 1992). They formed a combination of AI software and parallel hardware for operations like logical inference and data retrieval. The sixth generation can be seen as an integration of insights into computer design and programming from cognitive science and neuroscience. 
The machine of the future must be a cooperative computation system that integrates different subsystems, each quite specialized in structure and some supplied with sensors and motors. A number of these subsystems should implement huge neural networks. Some of the neural hardware implementations described in this overview allow for fast real-world interaction via sensors and effectors and can thus function as action oriented systems (Arbib, 1989). A large number of parallel neural network implementation studies have been carried out on existing massively parallel machines, simply because neural hardware was not available. Although these machines were not specially designed for neural implementations, in many cases very high performance rates have been obtained. Many examples can be found in the literature, for instance: implementation of backpropagation networks has been performed on the Connection Machine (Singer, 1990), Warp (Pomerleau et al., 1988), MasPar (Chinn et al., 1990; Grasjki, 1992), Hughes (Shams & Gaudiot, 1990), GF11 (Witbrock & Zagha, 1989), AAP-2 (Watanabe et al., 1989), transputer based machines (e.g., Vuurpijl, 1992), and the CRAY YM-P supercomputer (Leung & Setiono, 1993). Much can be learned from these studies about programming neural functions and mapping networks onto pre-specified architectures. Several successful architectural issues have been re-applied in the newer dedicated neurocomputer designs. This paper is limited to an overview of electronic learning neural hardware and will not outline the architectures of general purpose massively parallel computers (supercomputers), or neurocomputer designs based on other (although very promising) implementation techniques such as opto-electronics, electro-chemical, and molecular techniques. Other neural hardware overviews have been written by Treleaven (1989), Nordström et al. (1992), Vellasco (1992), Ienne (1993b), Glesner and Pöchmüller (1994), and Ramacher and Rückert (1991). This neural hardware overview will be given by grouping the different approaches into four main categories, according to a scheme given by Rückert (1993); see Figure 1 (Neurocomputer categories). Speed performance increases with (bottom) category from left to right. The first two main categories consist of neurocomputers based on standard ICs. They consist of accelerator boards, which speed up a conventional computer like a PC or workstation, and parallel multiprocessor systems, which mostly run stand alone and can be monitored by a host computer. In these approaches, where standard parts are used, designers can concentrate fully on developing one technology at a time. The other main categories consist of neurocomputers built from dedicated neural ASICs (application specific integrated circuits). These neurochips can be digital, analog, or hybrid. The number of worldwide ongoing projects is already too large to allow a complete overview. Special attention will be given to projects which enable the implementation of action oriented systems and support the following issues: speed, adaptivity, flexibility, and scalability. Reported performance rates are in (million) CPS and CUPS and are taken from publications. They only serve as indications and have to be compared with care since the implementations differ in precision and size. Furthermore, adequate benchmarks have not yet been generally developed. An often used benchmark for the learning and recall phase of backpropagation networks is NETtalk, which translates text to phonemes (Sejnowski and Rosenberg, 1987).
Other hardware benchmark proposals have for instance been made by Ienne (1993a) and Van Keulen et al. (1994). 2 Neurocomputers consisting of a conventional computer + accelerator board Accelerator boards are the most frequently used commercial neural hardware, because they are relatively cheap, widely available, simple to connect to the PC or workstation, and typically provided with user friendly software tools. In some cases users are already familiar with the software tools, having used them on their standard, non-accelerated systems. (An overview of neural network simulators can be found in Murre, 1995.) Such users will only notice the addition of an accelerator board by the reduced waiting times when training or running a neural model. The speed-up that can be achieved is at about one order of magnitude compared to sequential implementations. Some accelerators come with up to 20 different versions of paradigms which allow for a lot of experimenting. A drawback of this kind of neurocomputer and commercially available software simulators is that they lack flexibility and thus do not offer many possibilities for setting up novel paradigms. Some commercially available accelerator systems will now be briefly reviewed. ANZA plus (HNC [Hecht-Nielsen Corporation] CA, USA) The ANZA coprocessor boards plug into the backplane of a PC. They contain a Motorola MC68020 processor and a MC68881 floating point coprocessor. Performance rates are given in Treleaven (1989): 1M virtual PEs, 1.5 M Connections, 1.500 CUPS, 6 MCPS. Software: User Interface Subroutine Library. SAIC SIGMA-1 (Science Applications International Corporation) This is a Delta floating point processor based system to be plugged into a PC. Performance: 3.1 M virtual PEs and interconnections, 11 MCUPS. Software: neural net library ANSim and ANSpec, an object oriented language (from: Treleaven, 1989). NT6000 (Neural Technologies Limited, HNC, and California Scientific Software) Neural Technologies Limited is an organization which offers a wide range of commercially available neural products. The NT6000st and NT6000hs, for instance, are neural network plug-in PC-cards that fulfill the need for intelligent data acquisition. They are equipped with a (TMS320) DSP and an NISP (Neural Instruction Set Processor), which speeds up neural processing to 2 MCPS. Products like these are supplied with well developed software packages that allow for interfacing to other commercially available neural network simulators like BrainMaker and NeuralWorks (see Murre, 1995). The NT6000 series are well suited to be connected to both analog and digital input/output systems (from: Neural Technologies data sheet, 1993). All this together makes a standard (386 or 486) based PC an interesting neural network implementation tool. The implemented neural network paradigms are, however, limited to back",
"title": ""
},
{
"docid": "e1ed9d36e7b84ce7dcc74ac5f684ea76",
"text": "As integrated circuits (ICs) continue to have an overwhelming presence in our digital information-dominated world, having trust in their manufacture and distribution mechanisms is crucial. However, with ever-shrinking transistor technologies, the cost of new fabrication facilities is becoming prohibitive, pushing industry to make greater use of potentially less reliable foreign sources for their IC supply. The 2008 Computer Security Awareness Week (CSAW) Embedded Systems Challenge at the Polytechnic Institute of NYU highlighted some of the vulnerabilities of the IC supply chain in the form of a hardware hacking challenge. This paper explores the design and implementation of our winning entry.",
"title": ""
},
{
"docid": "4720a84220e37eca1d0c75697f247b23",
"text": "We describe a form of nonlinear decomposition that is well-suited for efficient encoding of natural signals. Signals are initially decomposed using a bank of linear filters. Each filter response is then rectified and divided by a weighted sum of rectified responses of neighboring filters. We show that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively. These results suggest that nonlinear response properties of sensory neurons are not an accident of biological implementation, but have an important functional role.",
"title": ""
},
{
"docid": "18498166845b27890110c3ca0cd43d86",
"text": "Raine Mäntysalo The purpose of this article is to make an overview of postWWII urban planning theories from the point of view of participation. How have the ideas of public accountability, deliberative democracy and involvement of special interests developed from one theory to another? The urban planning theories examined are rational-comprehensive planning theory, advocacy planning theory, incrementalist planning theory and the two branches of communicative planning theory: planning as consensus-seeking and planning as management of conflicts.",
"title": ""
},
{
"docid": "3cb25b6438593a36c6867a2edbbd6136",
"text": "One of the most significant challenges of human-robot interaction research is designing systems which foster an appropriate level of trust in their users: in order to use a robot effectively and safely, a user must place neither too little nor too much trust in the system. In order to better understand the factors which influence trust in a robot, we present a survey of prior work on trust in automated systems. We also discuss issues specific to robotics which pose challenges not addressed in the automation literature, particularly related to reliability, capability, and adjustable autonomy. We conclude with the results of a preliminary web-based questionnaire which illustrate some of the biases which autonomous robots may need to overcome in order to promote trust in users.",
"title": ""
}
] |
scidocsrr
|
b893e6d4add98f2b44b1983897b707c8
|
FHM + : Faster High-Utility Itemset Mining Using Length Upper-Bound Reduction
|
[
{
"docid": "6d8e7574f75b19edaee0b2cc8d4c1383",
"text": "High-utility itemset mining (HUIM) is an important data mining task with wide applications. In this paper, we propose a novel algorithm named EFIM (EFficient high-utility Itemset Mining), which introduces several new ideas to more efficiently discovers high-utility itemsets both in terms of execution time and memory. EFIM relies on two upper-bounds named sub-tree utility and local utility to more effectively prune the search space. It also introduces a novel array-based utility counting technique named Fast Utility Counting to calculate these upper-bounds in linear time and space. Moreover, to reduce the cost of database scans, EFIM proposes efficient database projection and transaction merging techniques. An extensive experimental study on various datasets shows that EFIM is in general two to three orders of magnitude faster and consumes up to eight times less memory than the state-of-art algorithms dHUP, HUI-Miner, HUP-Miner, FHM and UP-Growth+.",
"title": ""
}
] |
[
{
"docid": "c5c64d7fcd9b4804f7533978026dcfbd",
"text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "3cf4fe068901b9d4ccdcaea2232d7d4e",
"text": "Schizophrenia (SZ) is a complex mental disorder associated with genetic variations, brain development and activities, and environmental factors. There is an increasing interest in combining genetic, epigenetic and neuroimaging datasets to explore different level of biomarkers for the correlation and interaction between these diverse factors. Sparse Multi-Canonical Correlation Analysis (sMCCA) is a powerful tool that can analyze the correlation of three or more datasets. In this paper, we propose the sMCCA model for imaging genomics study. We show the advantage of sMCCA over sparse CCA (sCCA) through the simulation testing, and further apply it to the analysis of real data (SNPs, fMRI and methylation) from schizophrenia study. Some new genes and brain regions related to SZ disease are discovered by sMCCA and the relationships among these biomarkers are further discussed.",
"title": ""
},
{
"docid": "ed447f3f4bbe8478e9e1f3c4593dbf1b",
"text": "We revisit the fundamental question of Bitcoin's security against double spending attacks. While previous work has bounded the probability that a transaction is reversed, we show that no such guarantee can be effectively given if the attacker can choose when to launch the attack. Other approaches that bound the cost of an attack have erred in considering only limited attack scenarios, and in fact it is easy to show that attacks may not cost the attacker at all. We therefore provide a different interpretation of the results presented in previous papers and correct them in several ways. We provide different notions of the security of transactions that provide guarantees to different classes of defenders: merchants who regularly receive payments, miners, and recipients of large one-time payments. We additionally consider an attack that can be launched against lightweight clients, and show that these are less secure than their full node counterparts and provide the right strategy for defenders in this case as well. Our results, overall, improve the understanding of Bitcoin's security guarantees and provide correct bounds for those wishing to safely accept transactions.",
"title": ""
},
{
"docid": "6b4efbb3572eeb09536e2ec82825f2fb",
"text": "Well-designed games are good motivators by nature, as they imbue players with clear goals and a sense of reward and fulfillment, thus encouraging them to persist and endure in their quests. Recently, this motivational power has started to be applied to non- game contexts, a practice known as Gamification. This adds gaming elements to non-game processes, motivating users to adopt new behaviors, such as improving their physical condition, working more, or learning something new. This paper describes an experiment in which game-like elements were used to improve the delivery of a Master's level College course, including scoring, levels, leaderboards, challenges and badges. To assess how gamification impacted the learning experience, we compare the gamified course to its non-gamified version from the previous year, using different performance measures. We also assessed student satisfaction as compared to other regular courses in the same academic context. Results were very encouraging, showing significant increases ranging from lecture attendance to online participation, proactive behaviors and perusing the course reference materials. Moreover, students considered the gamified instance to be more motivating, interesting and easier to learn as compared to other courses. We finalize by discussing the implications of these results on the design of future gamified learning experiences.",
"title": ""
},
{
"docid": "1841d05590d1173711a2d47824a979cc",
"text": "Heater plates or sheets that are visibly transparent have many interesting applications in optoelectronic devices such as displays, as well as in defrosting, defogging, gas sensing and point-of-care disposable devices. In recent years, there have been many advances in this area with the advent of next generation transparent conducting electrodes (TCE) based on a wide range of materials such as oxide nanoparticles, CNTs, graphene, metal nanowires, metal meshes and their hybrids. The challenge has been to obtain uniform and stable temperature distribution over large areas, fast heating and cooling rates at low enough input power yet not sacrificing the visible transmittance. This review provides topical coverage of this important research field paying due attention to all the issues mentioned above.",
"title": ""
},
{
"docid": "30fb0e394f6c4bf079642cd492229b67",
"text": "Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act(CALEA), use a standard interface provided in network switches.\n This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap mitigation strategies that partially mitigate some of our identified attacks.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "39b5095283fd753013c38459a93246fd",
"text": "OBJECTIVE\nTo determine whether cannabis use in adolescence predisposes to higher rates of depression and anxiety in young adulthood.\n\n\nDESIGN\nSeven wave cohort study over six years.\n\n\nSETTING\n44 schools in the Australian state of Victoria.\n\n\nPARTICIPANTS\nA statewide secondary school sample of 1601 students aged 14-15 followed for seven years.\n\n\nMAIN OUTCOME MEASURE\nInterview measure of depression and anxiety (revised clinical interview schedule) at wave 7.\n\n\nRESULTS\nSome 60% of participants had used cannabis by the age of 20; 7% were daily users at that point. Daily use in young women was associated with an over fivefold increase in the odds of reporting a state of depression and anxiety after adjustment for intercurrent use of other substances (odds ratio 5.6, 95% confidence interval 2.6 to 12). Weekly or more frequent cannabis use in teenagers predicted an approximately twofold increase in risk for later depression and anxiety (1.9, 1.1 to 3.3) after adjustment for potential baseline confounders. In contrast, depression and anxiety in teenagers predicted neither later weekly nor daily cannabis use.\n\n\nCONCLUSIONS\nFrequent cannabis use in teenage girls predicts later depression and anxiety, with daily users carrying the highest risk. Given recent increasing levels of cannabis use, measures to reduce frequent and heavy recreational use seem warranted.",
"title": ""
},
{
"docid": "494636aeb3d02c02cce1db18b4ce63ee",
"text": "AIMS/BACKGROUND\nThe objective of this review was to define the impact of cementation mode on the longevity of different types of single tooth restorations and fixed dental prostheses (FDP).\n\n\nMETHODS\nLiterature search by PubMed as the major database was used utilizing the terms namely, adhesive techniques, all-ceramic crowns, cast-metal, cement, cementation, ceramic inlays, gold inlays, metal-ceramic, non-bonded fixed-partial-dentures, porcelain veneers, resin-bonded fixed-partial-dentures, porcelain-fused-to-metal, and implant-supported-restorations together with manual search of non-indexed literature. Cementation of root canal posts and cores were excluded. Due to lack of randomized prospective clinical studies in some fields of cementation, recommendations had to be based on lower evidence level (Centre of Evidence Based Medicine, Oxford) for special applications of current cements.\n\n\nRESULTS\nOne-hundred-and-twenty-five articles were selected for the review. The primary function of the cementation is to establish reliable retention, a durable seal of the space between the tooth and the restoration, and to provide adequate optical properties. The various types of cements used in dentistry could be mainly divided into two groups: Water-based cements and polymerizing cements. Water-based cements exhibited satisfying long-term clinical performance associated with cast metal (inlays, onlays, partial crowns) as well as single unit metal-ceramic FDPs and multiple unit FDPs with macroretentive preparation designs and adequate marginal fit. Early short-term clinical results with high-strength all-ceramic restorations luted with water-based cements are also promising. Current polymerizing cements cover almost all fields of water-based cements and in addition to that they are mainly indicated for non-retentive restorations. They are able to seal the tooth completely creating hybrid layer formation. Furthermore, adhesive capabilities of polymerizing cements allowed for bonded restorations, promoting at the same time the preservation of dental tissues.",
"title": ""
},
{
"docid": "a129ad8154320f7be949527843207b89",
"text": "Availability of several web services having a similar functionality has led to using quality of service (QoS) attributes to support services selection and management. To improve these operations and be performed proactively, time series ARIMA models have been used to forecast the future QoS values. However, the problem is that in this extremely dynamic context the observed QoS measures are characterized by a high volatility and time-varying variation to the extent that existing ARIMA models cannot guarantee accurate QoS forecasting where these models are based on a homogeneity (constant variation over time) assumption, which can introduce critical problems such as proactively selecting a wrong service and triggering unrequired adaptations and thus leading to follow-up failures and increased costs. To address this limitation, we propose a forecasting approach that integrates ARIMA and GARCH models to be able to capture the QoS attributes' volatility and provide accurate forecasts. Using QoS datasets of real-world web services we evaluate the accuracy and performance aspects of the proposed approach. Results show that the proposed approach outperforms the popular existing ARIMA models and improves the forecasting accuracy of QoS measures and violations by on average 28.7% and 15.3% respectively.",
"title": ""
},
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "92c3738d8873eb223a5a478cc76c95b0",
"text": "Visual target tracking is one of the major fields in computer vision system. Object tracking has many practical applications such as automated surveillance system, military guidance, traffic management system, fault detection system, artificial intelligence and robot vision system. But it is difficult to track objects with image sensor. Especially, multiple objects tracking is harder than single object tracking. This paper proposes multiple objects tracking algorithm based on the Kalman filter. Our algorithm uses the Kalman filter as many as the number of moving objects in the image frame. If many moving objects exist in the image, however, we obtain multiple measurements. Therefore, precise data association is necessary in order to track multiple objects correctly. Another problem of multiple objects tracking is occlusion that causes merge and split. For solving these problems, this paper defines the cost function using some factors. Experiments using Matlab show that the performance of the proposed algorithm is appropriate for multiple objects tracking in real-time.",
"title": ""
},
{
"docid": "21393a1c52b74517336ef3e08dc4d730",
"text": "The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.",
"title": ""
},
{
"docid": "3e0dd3cf428074f21aaf202342003554",
"text": "Despite significant recent work, purely unsupervised techniques for part-of-speech (POS) tagging have not achieved useful accuracies required by many language processing tasks. Use of parallel text between resource-rich and resource-poor languages is one source of weak supervision that significantly improves accuracy. However, parallel text is not always available and techniques for using it require multiple complex algorithmic steps. In this paper we show that we can build POS-taggers exceeding state-of-the-art bilingual methods by using simple hidden Markov models and a freely available and naturally growing resource, the Wiktionary. Across eight languages for which we have labeled data to evaluate results, we achieve accuracy that significantly exceeds best unsupervised and parallel text methods. We achieve highest accuracy reported for several languages and show that our approach yields better out-of-domain taggers than those trained using fully supervised Penn Treebank.",
"title": ""
},
{
"docid": "f8def1217137641547921e3f52c0b4ae",
"text": "A 50-GHz charge pump phase-locked loop (PLL) utilizing an LC-oscillator-based injection-locked frequency divider (ILFD) was fabricated in 0.13-mum logic CMOS process. The PLL can be locked from 45.9 to 50.5 GHz and output power level is around -10 dBm. The operating frequency range is increased by tracking the self-oscillation frequencies of the voltage-controlled oscillator (VCO) and the frequency divider. The PLL including buffers consumes 57 mW from 1.5/0.8-V supplies. The phase noise at 50 kHz, 1 MHz, and 10 MHz offset from the carrier is -63.5, -72, and -99 dBc/Hz, respectively. The PLL also outputs second-order harmonics at frequencies between 91.8 and 101 GHz. The output frequency of 101 GHz is the highest for signals locked by a PLL fabricated using the silicon integrated circuits technology.",
"title": ""
},
{
"docid": "c773efb805899ee9e365b5f19ddb40bc",
"text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.",
"title": ""
},
{
"docid": "cb7b6c586f106518e234d893a341b238",
"text": "For more than thirty years, people have relied primarily on screen-based text and graphics to interact with computers. Whether the screen is placed on a desk, held in one’s hand, worn on one’s head, or embedded in the physical environment, the screen has cultivated a predominantly visual paradigm of humancomputer interaction. In this chapter, we discuss a growing space of interfaces in which physical objects play a central role as both physical representations and controls for digital information. We present an interaction model and key characteristics for such “tangible user interfaces,” and explore these characteristics in a number of interface examples. This discussion supports a newly integrated view of both recent and previous work, and points the way towards new kinds of computationally-mediated interfaces that more seamlessly weave together the physical and digital worlds.",
"title": ""
},
{
"docid": "30279db171fffe6fac561541a5d175ca",
"text": "Deformable displays can provide two major benefits compared to rigid displays: Objects of different shapes and deformabilities, situated in our physical environment, can be equipped with deformable displays, and users can benefit from their pre-existing knowledge about the interaction with physical objects when interacting with deformable displays. In this article we present InformationSense, a large, highly deformable cloth display. The article contributes to two research areas in the context of deformable displays: It presents an approach for the tracking of large, highly deformable surfaces, and it presents one of the first UX analyses of cloth displays that will help with the design of future interaction techniques for this kind of display. The comparison of InformationSense with a rigid display interface unveiled the trade-off that while users are able to interact with InformationSense more naturally and significantly preferred InformationSense in terms of joy of use, they preferred the rigid display interfaces in terms of efficiency. This suggests that deformable displays are already suitable if high hedonic qualities are important but need to be enhanced with additional digital power if high pragmatic qualities are required.",
"title": ""
}
] |
scidocsrr
|
0802e3c8c5b07284ddadb0a7e110972b
|
ARMin II - 7 DoF rehabilitation robot: mechanics and kinematics
|
[
{
"docid": "1b8e90d78ca21fcaa5cca628cba4111a",
"text": "The Rutgers Master II-ND glove is a haptic interface designed for dextrous interactions with virtual environments. The glove provides force feedback up to 16 N each to the thumb, index, middle, and ring fingertips. It uses custom pneumatic actuators arranged in a direct-drive configuration in the palm. Unlike commercial haptic gloves, the direct-drive actuators make unnecessary cables and pulleys, resulting in a more compact and lighter structure. The force-feedback structure also serves as position measuring exoskeleton, by integrating noncontact Hall-effect and infrared sensors. The glove is connected to a haptic-control interface that reads its sensors and servos its actuators. The interface has pneumatic servovalves, signal conditioning electronics, A/D/A boards, power supply and an imbedded Pentium PC. This distributed computing assures much faster control bandwidth than would otherwise be possible. Communication with the host PC is done over an RS232 line. Comparative data with the CyberGrasp commercial haptic glove is presented.",
"title": ""
}
] |
[
{
"docid": "3e9845c255b5e816741c04c4f7cf5295",
"text": "This paper presents the packaging technology and the integrated antenna design for a miniaturized 122-GHz radar sensor. The package layout and the assembly process are shortly explained. Measurements of the antenna including the flip chip interconnect are presented that have been achieved by replacing the IC with a dummy chip that only contains a through-line. Afterwards, radiation pattern measurements are shown that were recorded using the radar sensor as transmitter. Finally, details of the fully integrated radar sensor are given, together with results of the first Doppler measurements.",
"title": ""
},
{
"docid": "c5c62c1cee291e8ba9e3ed6e04da146d",
"text": "Traumatic brain injury (TBI) is a leading cause of death and disability among persons in the United States. Each year, an estimated 1.5 million Americans sustain a TBI. As a result of these injuries, 50,000 people die, 230,000 people are hospitalized and survive, and an estimated 80,000-90,000 people experience the onset of long-term disability. Rates of TBI-related hospitalization have declined nearly 50% since 1980, a phenomenon that may be attributed, in part, to successes in injury prevention and also to changes in hospital admission practices that shift the care of persons with less severe TBI from inpatient to outpatient settings. The magnitude of TBI in the United States requires public health measures to prevent these injuries and to improve their consequences. State surveillance systems can provide reliable data on injury causes and risk factors, identify trends in TBI incidence, enable the development of cause-specific prevention strategies focused on populations at greatest risk, and monitor the effectiveness of such programs. State follow-up registries, built on surveillance systems, can provide more information regarding the frequency and nature of disabilities associated with TBI. This information can help states and communities to design, implement, and evaluate cost-effective programs for people living with TBI and for their families, addressing acute care, rehabilitation, and vocational, school, and community support.",
"title": ""
},
{
"docid": "a33ccc1d1f906b2f09669166a1fe093c",
"text": "A writer’s style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write.",
"title": ""
},
{
"docid": "2caea7f13980ea4a48fb8e8bb71842f1",
"text": "Internet of Things, commonly known as IoT is a promising area in technology that is growing day by day. It is a concept whereby devices connect with each other or to living things. Internet of Things has shown its great benefits in today’s life. Agriculture is one amongst the sectors which contributes a lot to the economy of Mauritius and to get quality products, proper irrigation has to be performed. Hence proper water management is a must because Mauritius is a tropical island that has gone through water crisis since the past few years. With the concept of Internet of Things and the power of the cloud, it is possible to use low cost devices to monitor and be informed about the status of an agricultural area in real time. Thus, this paper provides the design and implementation of a Smart Irrigation and Monitoring System which makes use of Microsoft Azure machine learning to process data received from sensors in the farm and weather forecasting data to better inform the farmers on the appropriate moment to start irrigation. The Smart Irrigation and Monitoring System is made up of sensors which collect data such as air humidity, air temperature, and most importantly soil moisture data. These data are used to monitor the air quality and water content of the soil. The raw data are transmitted to the",
"title": ""
},
{
"docid": "554b82dc9820bae817bac59e81bf798a",
"text": "This paper proposed a 4-channel parallel 40 Gb/s front-end amplifier (FEA) in optical receiver for parallel optical transmission system. A novel enhancement type regulated cascade (ETRGC) configuration with an active inductor is originated in this paper for the transimpedance amplifier to significantly increase the bandwidth. The technique of three-order interleaving active feedback expands the bandwidth of the gain stage of transimpedance amplifier and limiting amplifier. Experimental results show that the output swing is 210 mV (Vpp) when the input voltage varies from 5 mV to 500 mV. The power consumption of the 4-channel parallel 40 Gb/s front-end amplifier (FEA) is 370 mW with 1.8 V power supply and the chip area is 650 μm×1300 μm.",
"title": ""
},
{
"docid": "1e92b67253b520187c923ba92e7f30d1",
"text": "Availability of high speed internet and wide use of mobile phones leads to gain the popularity to IoT. One such important concept of the same is the use of mobile phones by working parents to watch the activities of baby while babysitting. This paper presents the design of Smart Cradle which supports such video monitoring. This cradle swings automatically on detection of baby cry sound. Also it activates buzzer and gives alerts on phone if-first, baby cry continues till specific time which means now cradle cannot handle baby and baby needs personal attention and second, if the mattress in the cradle is wet. This cradle has an automatic rotating toy for baby's entertainment which will reduce the baby cry possibility.",
"title": ""
},
{
"docid": "b91f80bc17de9c4e15ec80504e24b045",
"text": "Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight encryption scheme, referred to as Hummingbird, and its applications to a privacy-preserving identification and mutual authentication protocol for RFID applications. Hummingbird can provide the designed security with a small block size and is therefore expected to meet the stringent response time and power consumption requirements described in the ISO protocol without any modification of the current standard. We show that Hummingbird is resistant to the most common attacks such as linear and differential cryptanalysis. Furthermore, we investigate some properties for integrating the Hummingbird into a privacypreserving identification and mutual authentication protocol.",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "0a7a2cfe41f1a04982034ef9cb42c3d4",
"text": "The biocontrol agent Torymus sinensis has been released into Japan, the USA, and Europe to suppress the Asian chestnut gall wasp, Dryocosmus kuriphilus. In this study, we provide a quantitative assessment of T. sinensis effectiveness for suppressing gall wasp infestations in Northwest Italy by annually evaluating the percentage of chestnuts infested by D. kuriphilus (infestation rate) and the number of T. sinensis adults that emerged per 100 galls (emergence index) over a 9-year period. We recorded the number of T. sinensis adults emerging from a total of 64,000 galls collected from 23 sampling sites. We found that T. sinensis strongly reduced the D. kuriphilus population, as demonstrated by reduced galls and an increased T. sinensis emergence index. Specifically, in Northwest Italy, the infestation rate was nearly zero 9 years after release of the parasitoid with no evidence of resurgence in infestation levels. In 2012, the number of T. sinensis females emerging per 100 galls was approximately 20 times higher than in 2009. Overall, T. sinensis proved to be an outstanding biocontrol agent, and its success highlights how the classical biological control approach may represent a cost-effective tool for managing an exotic invasive pest.",
"title": ""
},
{
"docid": "5527521d567290192ea26faeb6e7908c",
"text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.",
"title": ""
},
{
"docid": "0b41c2e8be4b9880a834b44375eb6c75",
"text": "We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models. AliMe Chat uses an attentive Seq2Seq based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation based models. We launch AliMe Chat for a real-world industrial application and observe better results than another public chatbot.",
"title": ""
},
{
"docid": "181463723aaaf766e387ea292cba8d5d",
"text": "Computational thinking has been promoted in recent years as a skill that is as fundamental as being able to read, write, and do arithmetic. However, what computational thinking really means remains speculative. While wonders, discussions and debates will likely continue, this article provides some analysis aimed to further the understanding of the notion. It argues that computational thinking is likely a hybrid thinking paradigm that must accommodate different thinking modes in terms of the way each would influence what we do in computation. Furthermore, the article makes an attempt to define computational thinking and connect the (potential) thinking elements to the known thinking paradigms. Finally, the author discusses some implications of the analysis.",
"title": ""
},
{
"docid": "77cea98467305b9b3b11de8d3cec6ec2",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "7cc3da275067df8f6c017da37025856c",
"text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.",
"title": ""
},
{
"docid": "ed6543545ec40cf1b197dcd31bcad9d5",
"text": "Stroke is the leading cause of death and adult disability worldwide. Mitochondrial dysfunction has been regarded as one of the hallmarks of ischemia/reperfusion (I/R) induced neuronal death. Maintaining the function of mitochondria is crucial in promoting neuron survival and neurological improvement. In this article, we review current progress regarding the roles of mitochondria in the pathological process of cerebral I/R injury. In particular, we emphasize on the most critical mechanisms responsible for mitochondrial quality control, as well as the recent findings on mitochondrial transfer in acute stroke. We highlight the potential of mitochondria as therapeutic targets for stroke treatment and provide valuable insights for clinical strategies.",
"title": ""
},
{
"docid": "02fd763f6e15b07187e3cbe0fd3d0e18",
"text": "The Batcher`s bitonic sorting algorithm is a parallel sorting algorithm, which is used for sorting the numbers in modern parallel machines. There are various parallel sorting algorithms such as radix sort, bitonic sort, etc. It is one of the efficient parallel sorting algorithm because of load balancing property. It is widely used in various scientific and engineering applications. However, Various researches have worked on a bitonic sorting algorithm in order to improve up the performance of original batcher`s bitonic sorting algorithm. In this paper, tried to review the contribution made by these researchers.",
"title": ""
},
{
"docid": "47d997ef6c4f70105198415002c2c5dc",
"text": "The potential of using of millimeter wave (mmWave) frequency for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, the conventional fully digital beamforming methods which require one radio frequency (RF) chain per antenna element is not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components in high frequencies. To address the challenge of this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with much fewer number of RF chains. Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer number of RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multi-user multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves a performance close to the performance of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used.",
"title": ""
},
{
"docid": "2f1acb3378e5281efac7db5b3371b131",
"text": "Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves stateof-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks.1",
"title": ""
},
{
"docid": "e2f5feaa4670bc1ae21d7c88f3d738e3",
"text": "Orofacial clefts are common birth defects and can occur as isolated, nonsyndromic events or as part of Mendelian syndromes. There is substantial phenotypic diversity in individuals with these birth defects and their family members: from subclinical phenotypes to associated syndromic features that is mirrored by the many genes that contribute to the etiology of these disorders. Identification of these genes and loci has been the result of decades of research using multiple genetic approaches. Significant progress has been made recently due to advances in sequencing and genotyping technologies, primarily through the use of whole exome sequencing and genome-wide association studies. Future progress will hinge on identifying functional variants, investigation of pathway and other interactions, and inclusion of phenotypic and ethnic diversity in studies.",
"title": ""
},
{
"docid": "4ec7480aeb1b3193d760d554643a1660",
"text": "The ability to learn is arguably the most crucial aspect of human intelligence. In reinforcement learning, we attempt to formalize a certain type of learning that is based on rewards and penalties. These supervisory signals should guide an agent to learn optimal behavior. In particular, this research focuses on deep reinforcement learning, where the agent should learn to play video games solely from pixel input. This thesis contributes to deep reinforcement learning research by assessing several variations to an existing state-of-the-art algorithm. First, we provide an extensive analysis on how the design decisions of the agent’s deep neural network affect its performance. Second, we introduce a novel neural layer that allows for local specializations in the visual input of the agents, as opposed to the global weight sharing that occurs in convolutional layers. Third, we introduce a ‘what’ and ‘where’ neural network architecture, inspired by the information flow of the visual cortical areas in the human brain. Finally, we explore prototype based deep reinforcement learning by introducing a novel output layer that is largely inspired by learning vector quantization. In a subset of our experiments, we show substantial improvements compared to existing alternatives.",
"title": ""
}
] |
scidocsrr
|
85b826ebc9d413bc2f8cafc15f97553b
|
Deep Metric Learning for Visual Understanding: An Overview of Recent Advances
|
[
{
"docid": "fa82b75a3244ef2407c2d14c8a3a5918",
"text": "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.",
"title": ""
}
] |
[
{
"docid": "8ebdf482a0a258722906a26d26164ba6",
"text": "Vehicle detection is a challenging problem in autonomous driving systems, due to its large structural and appearance variations. In this paper, we propose a novel vehicle detection scheme based on multi-task deep convolutional neural networks (CNNs) and region-of-interest (RoI) voting. In the design of CNN architecture, we enrich the supervised information with subcategory, region overlap, bounding-box regression, and category of each training RoI as a multi-task learning framework. This design allows the CNN model to share visual knowledge among different vehicle attributes simultaneously, and thus, detection robustness can be effectively improved. In addition, most existing methods consider each RoI independently, ignoring the clues from its neighboring RoIs. In our approach, we utilize the CNN model to predict the offset direction of each RoI boundary toward the corresponding ground truth. Then, each RoI can vote those suitable adjacent bounding boxes, which are consistent with this additional information. The voting results are combined with the score of each RoI itself to find a more accurate location from a large number of candidates. Experimental results on the real-world computer vision benchmarks KITTI and the PASCAL2007 vehicle data set show that our approach achieves superior performance in vehicle detection compared with other existing published works.",
"title": ""
},
{
"docid": "eec40573db841727a1410e5408ae43ed",
"text": "The design of a compact low-loss magic-T is proposed. The planar magic-T incorporates the compact microstrip-slotline tee junction and small microstrip-slotline transition area to reduce slotline radiation. The experimental results show that the magic-T produces broadband in-phase and out-of-phase power combiner/divider responses, has an average in-band insertion loss of 0.3 dB and small in-band phase and amplitude imbalance of less than plusmn 1.6deg and plusmn 0.3 dB, respectively.",
"title": ""
},
{
"docid": "24a78bcc7c60ab436f6fd32bdc0d7661",
"text": "Passing the Turing Test is not a sensible goal for Artificial Intelligence. Adherence to Turing's vision from 1950 is now actively harmful to our field. We review problems with Turing's idea, and suggest that, ironically, the very cognitive science that he tried to create must reject his research goal.",
"title": ""
},
{
"docid": "a81f2102488e6d9599a5796b1b6eba57",
"text": "A content based image retrieval system (CBIR) is proposed to assist the dermatologist for diagnosis of skin diseases. First, after collecting the various skin disease images and their text information (disease name, symptoms and cure etc), a test database (for query image) and a train database of 460 images approximately (for image matching) are prepared. Second, features are extracted by calculating the descriptive statistics. Third, similarity matching using cosine similarity and Euclidian distance based on the extracted features is discussed. Fourth, for better results first four images are selected during indexing and their related text information is shown in the text file. Last, the results shown are compared according to doctor’s description and according to image content in terms of precision and recall and also in terms of a self developed scoring system. Keyword: Cosine similarity, Euclidian distance, Precision, Recall, Query image. 1. Basic introduction to cbir CBIR differs from classical information retrieval in that image databases are essentially unstructured, since digitized images consist purely of arrays of pixel intensities, with no inherent meaning. One of the key issues with any kind of image processing is the need to extract useful information from the raw data (such as recognizing the presence of particular shapes or textures) before any kind of reasoning about the image’s contents is possible. An example may make this clear. Many police forces now use automatic face recognition systems. Such systems may be used in one of two ways. Firstly, the image in front of the camera may be compared with a single individual’s database record to verify his or her identity. In this case, only two images are matched, a process few observers would call CBIR[15]. Secondly, the entire database may be searched to find the most closely matching images. This is a genuine example of CBIR. 2. Structure of CBIR model Basic modules and their brief discussion of a CBIR modal is described in the following Figure 1.Content based image retrieval system consists of following modules: Feature Extraction: In this module the features of interest are calculated for image database. Fig.1 Modules of CBIR modal Feature extraction of query image: This module calculates the feature of the query image. Query image can be a part of image database or it may not be a part of image database. Similarity measure: This module compares the feature database of the existing images with the query image on basis of the similarity measure of the interest[2]. Image Database Feature database Feature Extraction Results images Query image Indexing Similarity measure Feature extraction of query image ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5 , September 2013 ISSN : 2322-5157 www.ACSIJ.org 89 Copyright (c) 2013 Advances in Computer Science: an International Journal. All Rights Reserved. Indexing: This module performs filtering of images based on their content would provide better indexing and return more accurate results. Retrieval and Result: This module will display the matching images to the user based on indexing of similarity measure. Basic Components of the CBIR system are: Image Database: Database which stores images. It can be normal drive storage or database storage. Feature database: The entire extracted feature are stored in database like mat file, excel sheets etc. 3. Scope of CBIR for skin disease images Skin diseases are well known to be a large family. 
The identification of a certain skin disease is a complex and demanding task for dermatologist. A computer aided system can reduce the work load of the dermatologists, especially when the image database is immense. However, most contemporary work on computer aided analysis skin disease focuses on the detection of malignant melanoma. Thus, the features they used are very limited. The goal of our work is to build a retrieval algorithm for the more general diagnosis of various types of skin diseases. It can be very complex to define the features that can best distinguish between classes and yet be consistent within the same class of skin disease. Image and related Text Database is collected from a demonologist’s websites [17, 18]. There are mainly two kinds of methods for the application of a computer assistant. One is text query. A universally accepted and comprehensive dermatological terminology is created, and then example images are located and viewed using dermatological diagnostic concepts using a partial or complete word search. But the use of only descriptive annotation is too coarse and it is easy to make different types of disease fall into same category. The other method is to use visual features derived from color images of the diseased skin. The ability to perform reliable and consistent clinical research in dermatology hinges not only on the ability to accurately describe and codify diagnostic information, but also complex visual data. Visual patterns and images are at the core of dermatology education, research and practice. Visual features are broadly used in melanoma research, skin classification and segmentation. But there is a lack of tools using content-based skin image retrieval. 4. Problem formulation However, with the emergence of massive image databases, the traditional manual and text based search suffers from the following limitations: Manual annotations require too much time and are expensive to implement. As the number of images in a database grows, the difficulty in finding desired images increases. It is not feasible to manually annotate all attributes of the image content for large number of images. Manual annotations fail to deal with the discrepancy of subjective perception. The phrase, “an image says more than a thousand words,” implies a Content-Based Approach to Medical Image Database Retrieval that the textual description is not sufficient for depicting subjective perception. Typically, a medical image usually contains several objects, which convey specific information. Nevertheless, different interpretations for a pathological area can be made by different radiologists. To capture all knowledge, concepts, thoughts, and feelings for the content of any images is almost impossible. 5. Methodology of work 5.1General approach The general approach of image retrieval systems is based on query by image content. Figure 2 illustrate an overview of the image retrieval modal of skin disease images of proposed work. Fig.2 Overview of the Image query based skin disease image retrieval process FIRST FOUR RESULT IMAGES AND CORRESPONDI NG TEXT INFORMATION SKIN DISEASE IMAGE RETRIVAL SYSTEM IMAGE PRE PROCESSING RELATED SKIN DISEASE IMAGES (TRAIN DATABASE) AND TEXT INFO QUERY IMAGE FROM TEST DATABASE FEEDBACK FROM USER TEST DATABASE TEXT DATABASE TRAIN DATABASE ACSIJ Advances in Computer Science: an International Journal, Vol. 2, Issue 4, No.5 , September 2013 ISSN : 2322-5157 www.ACSIJ.org 90 Copyright (c) 2013 Advances in Computer Science: an International Journal. 
All Rights Reserved. 5.2 Database details : Our train database contains total 460 images (approximately) which are divided into twenty eight classes of skin disease, collected from reputed websites of medical images [17,18]. Test database contains images which are selected as query image. In the present work size of train database and test database is same. All the images are in .JPEG format. Images pixel dimension is set 300X300 by preprocessing. The illumination condition was also unknown for each image. Also, the images were collected with various backgrounds. Text database corresponding to each image contains skin disease name, symptoms, cure, and description of the disease. 5.3 Use Of Descriptive Statistics Parameters for Feature Extraction Statistical texture measures are calculated directly from the original image values, like mean, standard deviation, variance, kurtosis and Skewness [13], which do not consider pixel neighborhood relationships. Statistical measure of randomness that can be used to characterize the texture of the input image. Standard deviation is pixel value analysis feature [11]. First order statistics of the gray level allocation for each image matrix I(x, y) were examined through five commonly used metrics, namely, mean, variance, standard deviation, skewness and kurtosis as descriptive measurements of the overall gray level distribution of an image. Descriptive statistics refers to properties of distributions, such as location, dispersion, and shape [15]. 5.3.1 Location Measure: Location statistics describe where the data is located. Mean : For calculating the mean of element of vector x. ( ) = ( )/ if x is a matrix , compute the mean of each column and return them into a row vector[16]. 5.3.2 Dispersion Measures: Dispersion statistics summarize the scatter or spread of the data. Most of these functions describe deviation from a particular location. For instance, variance is a measure of deviation from the mean, and standard deviation is just the square root of the variance. Variance : For calculating the variance of element of vector x. ( ) = 1/(( − 1) _ ( ) − ( )^2) If x is a matrix , compute the variance of each column and return them into a row vector [16]. Standard Deviation: For calculating the Standard Deviation of element of vector x. ( ) = (1/( − 1) _ ( ( ) − ( ))^2) If x is a matrix , compute the Standard Deviation of each column and return them into a row vector[16]. 5.3.3 Shape Measures: For getting some information about the shape of a distribution using shape statistics. Skewness describes the amount of asymmetry. Kurtosis measures the concentration of data around the peak and in the tails versus the concentration in the flanks. Skewness: For calculating the skewness of element of vector x. ( ) = 1/ ( ) ^ (−3) (( − ( ). ^3) If x is a matrix, return the skewness along the first nonsingleton dimension of the matrix [",
"title": ""
},
{
"docid": "4bd7a933cf0d54a84c106a1591452565",
"text": "Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic with great significance for both academia and industry due to the rapidly increasing demand in user authentication on mobile phones, PCs, tablets, and so on. Recently, numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples are in the same domain in terms of the feature space and marginal probability distribution. However, due to unlimited variations of the dominant conditions (illumination, facial appearance, camera quality, and so on) in face acquisition, such single domain methods lack generalization capability, which further prevents them from being applied in practical applications. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme to address the real-world scenario that learns the classifier for the target domain based on training samples in a different source domain. In particular, an embedding function is first imposed based on source and target domain data, which maps the data to a new space where the distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features in source and target domains is minimized such that a more generalized classifier can be learned. State-of-the-art representations including both hand-crafted and deep neural network learned features are further adopted into the framework to quest the capability of them in domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. Extensive experiments on existing benchmark databases and the new database verify that the proposed approach can gain significantly better generalization capability in cross-domain scenarios by providing consistently better anti-spoofing performance.",
"title": ""
},
{
"docid": "3ed0e387f8e6a8246b493afbb07a9312",
"text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.",
"title": ""
},
{
"docid": "8e077186aef0e7a4232eec0d8c73a5a2",
"text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3595188804ba47f745c7b6b8f17c45c0",
"text": "This paper presents a novel electrocardiogram (ECG) processing technique for joint data compression and QRS detection in a wireless wearable sensor. The proposed algorithm is aimed at lowering the average complexity per task by sharing the computational load among multiple essential signal-processing tasks needed for wearable devices. The compression algorithm, which is based on an adaptive linear data prediction scheme, achieves a lossless bit compression ratio of 2.286x. The QRS detection algorithm achieves a sensitivity (Se) of 99.64% and positive prediction (+P) of 99.81% when tested with the MIT/BIH Arrhythmia database. Lower overall complexity and good performance renders the proposed technique suitable for wearable/ambulatory ECG devices.",
"title": ""
},
{
"docid": "5acad83ce99c6403ef20bfa62672eafd",
"text": "A large class of sequential decision-making problems under uncertainty can be modeled as Markov and Semi-Markov Decision Problems, when their underlying probability structure has a Markov chain. They may be solved by using classical dynamic programming methods. However, dynamic programming methods suffer from the curse of dimensionality and break down rapidly in face of large state spaces. In addition, dynamic programming methods require the exact computation of the so-called transition probabilities, which are often hard to obtain and are hence said to suffer from the curse of modeling as well. In recent years, a simulation-based method, called reinforcement learning, has emerged in the literature. It can, to a great extent, alleviate stochastic dynamic programming of its curses by generating near-optimal solutions to problems having large state-spaces and complex transition mechanisms. In this paper, a simulation-based algorithm that solves Markov and Semi-Markov decision problems is presented, along with its convergence analysis. The algorithm involves a step-size based transformation on two time scales. Its convergence analysis is based on a recent result on asynchronous convergence of iterates on two time scales. We present numerical results from the new algorithm on a classical preventive maintenance case study of a reasonable size, where results on the optimal policy are also available. In addition, we present a tutorial that explains the framework of reinforcement learning in the context of semi-Markov decision problems for long-run average cost.",
"title": ""
},
{
"docid": "56c66b0c2698d63d9ef5f690688ee36d",
"text": "This article presents the author's personal reflection on how her nursing practice was enhanced as a result of losing her voice. Surprisingly, being unable to speak appeared to improve the nurse/patient relationship. Patients responded positively to a quiet approach and silent communication. Indeed, the skilled use of non-verbal communication through silence, facial expression, touch and closer physical proximity appeared to facilitate active listening, and helped to develop empathy, intuition and presence between the nurse and patient. Quietly 'being with' patients and communicating non-verbally was an effective form of communication. It is suggested that effective communication is dependent on the nurse's ability to listen and utilize non-verbal communication skills. In addition, it is clear that reflection on practical experience can be an important method of uncovering and exploring tacit knowledge in nursing.",
"title": ""
},
{
"docid": "5c0f2bcde310b7b76ed2ca282fde9276",
"text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.",
"title": ""
},
{
"docid": "b46a9871dc64327f1ab79fa22de084ce",
"text": "Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, where the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how to leverage the Domain Name System (DNS) for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zone and DNSSEC records. We propose a DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug and play component that could be added to the existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG could effectively block DNS reconnaissance attacks.",
"title": ""
},
{
"docid": "48f7388fdf91a85cfeeee0d35e19c889",
"text": "Public key infrastructures (PKIs) are of crucial importance for the life of online services relying on certificate-based authentication, like e-commerce, e-government, online banking, as well as e-mail, social networking, cloud services and many others. One of the main points of failure (POFs) of modern PKIs concerns reliability and security of certificate revocation lists (CRLs), that must be available and authentic any time a certificate is used. Classically, the CRL for a set of certificates is maintained by the same (and sole) certification authority (CA) that issued the certificates, and this introduces a single POF in the system. We address this issue by proposing a solution in which multiple CAs share a public, decentralized and robust ledger where CRLs are collected. For this purpose, we consider the model of public ledgers based on blockchains, introduced for the use in cryptocurrencies, that is becoming a widespread solution for many online applications with stringent security and reliability requirements.",
"title": ""
},
{
"docid": "1672b30a74bf5d1111b1f0892b4018bc",
"text": "From the Divisions of Rheumatology, Allergy, and Immunology (M.R.M.) and Cardiology (D.M.D.); and the Departments of Radiology (J.Y.S.) and Pathology (R.P.H.), Massachusetts General Hospital; the Division of Rheumatology, Allergy, and Immunology, Brigham and Women’s Hospital (M.C.C.); and the Departments of Medicine (M.R.M., M.C.C., D.M.D.), Radiology (J.Y.S.), and Pathology (R.P.H.), Harvard Medical School — all in Boston.",
"title": ""
},
{
"docid": "b38f1dbd7b13c8b0ffd3277c5b62ba7f",
"text": "It is very difficult to find feasible QoS (Quality of service) routes in the mobile ad hoc networks (MANETs), because of the nature constrains of it, such as dynamic network topology, wireless communication link and limited process capability of nodes. In order to reduce average cost in flooding path discovery scheme of the traditional MANETs routing protocols and increase the probability of success in finding QoS feasible paths and It proposed a heuristic and distributed route discovery new method supports QoS requirement for MANETs in this study. This method integrates a distributed route discovery scheme with a Reinforcement Learning (RL) method that only utilizes the local information for the dynamic network environment; and the route expand scheme based on Cluster based Routing Algorithms (CRA) method to find more new feasible paths and avoid the problem of optimize timing in previous smart net Quality of service in MANET. In this paper proposed method Compared with traditional method, the experiment results shoItd the network performance is improved optimize timing, efficient and effective.",
"title": ""
},
{
"docid": "0701f4d74179857b736ebe2c7cdb78b7",
"text": "Modern computer networks generate significant volume of behavioural system logs on a daily basis. Such networks comprise many computers with Internet connectivity, and many users who access the Web and utilise Cloud services make use of numerous devices connected to the network on an ad-hoc basis. Measuring the risk of cyber attacks and identifying the most recent modus-operandi of cyber criminals on large computer networks can be difficult due to the wide range of services and applications running within the network, the multiple vulnerabilities associated with each application, the severity associated with each vulnerability, and the ever-changing attack vector of cyber criminals. In this paper we propose a framework to represent these features, enabling real-time network enumeration and traffic analysis to be carried out, in order to produce quantified measures of risk at specific points in time. We validate the approach using data from a University network, with a data collection consisting of 462,787 instances representing threats measured over a 144 hour period. Our analysis can be generalised to a variety of other contexts. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "f3fb98614d1d8ff31ca977cbf6a15a9c",
"text": "Paraphrase Identification and Semantic Similarity are two different yet well related tasks in NLP. There are many studies on these two tasks extensively on structured texts in the past. However, with the strong rise of social media data, studying these tasks on unstructured texts, particularly, social texts in Twitter is very interesting as it could be more complicated problems to deal with. We investigate and find a set of simple features which enables us to achieve very competitive performance on both tasks in Twitter data. Interestingly, we also confirm the significance of using word alignment techniques from evaluation metrics in machine translation in the overall performance of these tasks.",
"title": ""
},
{
"docid": "6a345950bb08717f52aeb87c859f72f2",
"text": "This paper presents Anonymouth, a novel framework for anonymizing writing style. Without accounting for style, anonymous authors risk identification. This framework is necessary to provide a tool for testing the consistency of anonymized writing style and a mechanism for adaptive attacks against stylometry techniques. Our framework defines the steps necessary to anonymize documents and implements them. A key contribution of this work is this framework, including novel methods for identifying which features of documents need to change and how they must be changed to accomplish document anonymization. In our experiment, 80% of the user study participants were able to anonymize their documents in terms of a fixed corpus and limited feature set used. However, modifying pre-written documents were found to be difficult and the anonymization did not hold up to more extensive feature sets. It is important to note that Anonymouth is only the first step toward a tool to acheive stylometric anonymity with respect to state-of-the-art authorship attribution techniques. The topic needs further exploration in order to accomplish significant anonymity.",
"title": ""
},
{
"docid": "58fb566facf511f6295126eebab521d7",
"text": "UNLABELLED\n Traditional wound tracing technique consists of tracing the perimeter of the wound on clear acetate with a fine-tip marker, then placing the tracing on graph paper and counting the grids to calculate the surface area. Standard wound measurement technique for calcu- lating wound surface area (wound tracing) was compared to a new wound measurement method using digital photo-planimetry software ([DPPS], PictZar® Digital Planimetry).\n\n\nMETHODS\nTwo hundred wounds of varying etiologies were measured and traced by experienced exam- iners (raters). Simultaneously, digital photographs were also taken of each wound. The digital photographs were downloaded onto a PC, and using DPPS software, the wounds were measured and traced by the same examiners. Accuracy, intra- and interrater reliability of wound measurements obtained from tracings and from DPPS were studied and compared. Both accuracy and rater variability were directly related to wound size when wounds were measured and traced in the tradi- tional manner.\n\n\nRESULTS\nIn small (< 4 cm2), regularly shaped (round or oval) wounds, both accuracy and rater reliability was 98% and 95%, respectively. However, in larger, irregularly shaped wounds or wounds with epithelial islands, DPPS was more accurate than traditional mea- suring (3.9% vs. 16.2% [average error]). The mean inter-rater reliabil- ity score was 94% for DPPS and 84% for traditional measuring. The mean intrarater reliability score was 98.3% for DPPS and 89.3% for traditional measuring. In contrast to traditional measurements, DPPS may provide a more objective assessment since it can be done by a technician who is blinded to the treatment plan. Planimetry of digital photographs allows for a closer examination (zoom) of the wound and better visibility of advancing epithelium.\n\n\nCONCLUSION\nMeasurements of wounds performed on digital photographs using planimetry software were simple and convenient. It was more accurate, more objective, and resulted in better correlation within and between examiners. .",
"title": ""
},
{
"docid": "78e4395a6bd6b4424813e20633d140b8",
"text": "This paper introduces a high-speed CMOS comparator. The comparator consists of a differential input stage, two regenerative flip-flops, and an S-R latch. No offset cancellation is exploited, which reduces the power consumption as well as the die area and increases the comparison speed. An experimental version of the comparator has been integrated in a standard double-poly double-metal 1.5-pm n-well process with a die area of only 140 x 100 pmz. This circuit, operating under a +2.5/– 2.5-V power supply, performs comparison to a precision of 8 b with a symmetrical input dynamic range of 2.5 V (therefore ~0.5 LSB resolution is equal to ~ 4.9 mV). input stage flip-flops S-R Iat",
"title": ""
}
] |
scidocsrr
|
aa949ae9603850c24e6f8218ac83fcda
|
A Cluster-then-label Semi-supervised Learning Approach for Pathology Image Classification
|
[
{
"docid": "e14801b902bad321870677c4a723ae2c",
"text": "We propose a framework to incorporate unlabeled data in kernel classifier, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach.",
"title": ""
}
] |
[
{
"docid": "4bc25964a496aec39ac751240783e62a",
"text": "A level graph G = (V; E;) is a directed acyclic graph with a mapping : V ! for i 6 = j, such that (v) (u) + 1 for each edge (u; v) 2 E. The level planarity testing problem is to decide if G can be drawn in the plane such that for each level V i , all v 2 V i are drawn on the line l i = f(x; k ? i) j x 2 Rg, the edges are drawn monotonically with respect to the vertical direction, and no edges intersect except at their end vertices. In order to draw a level planar graph without edge crossings, a level planar embedding of the level graph has to be computed. Level planar embeddings are characterized by linear orderings of the vertices in each V i (1 i k). We present an O(jV j) time algorithm for embedding level planar graphs. This approach is based on a level planarity test by J unger, Leipert, and Mutzel 1998].",
"title": ""
},
{
"docid": "80a9489262ee8d94d64dd8e475c060a3",
"text": "The effects of social-cognitive variables on preventive nutrition and behavioral intentions were studied in 580 adults at 2 points in time. The authors hypothesized that optimistic self-beliefs operate in 2 phases and made a distinction between action self-efficacy (preintention) and coping self-efficacy (postintention). Risk perceptions, outcome expectancies, and action self-efficacy were specified as predictors of the intention at Wave 1. Behavioral intention and coping self-efficacy served as mediators linking the 3 predictors with low-fat and high-fiber dietary intake 6 months later at Wave 2. Covariance structure analysis yielded a good model fit for the total sample and 6 subsamples created by a median split of 3 moderators: gender, age, and body weight. Parameter estimates differed between samples; the importance of perceived self-efficacy increased with age and weight.",
"title": ""
},
{
"docid": "37552cc90e02204afdd362a7d5978047",
"text": "In this talk we introduce visible light communication and discuss challenges and techniques to improve the performance of white organic light emitting diode (OLED) based systems.",
"title": ""
},
{
"docid": "3071b8a720277f0ab203a40aade90347",
"text": "The Internet became an indispensable part of people's lives because of the significant role it plays in the ways individuals interact, communicate and collaborate with each other. Over recent years, social media sites succeed in attracting a large portion of online users where they become not only content readers but also content generators and publishers. Social media users generate daily a huge volume of comments and reviews related to different aspects of life including: political, scientific and social subjects. In general, sentiment analysis refers to the task of identifying positive and negative opinions, emotions and evaluations related to an article, news, products, services, etc. Arabic sentiment analysis is conducted in this study using a small dataset consisting of 1,000 Arabic reviews and comments collected from Facebook and Twitter social network websites. The collected dataset is used in order to conduct a comparison between two free online sentiment analysis tools: SocialMention and SentiStrength that support Arabic language. The results which based on based on the two of classifiers (Decision tree (J48) and SVM) showed that the SentiStrength is better than SocialMention tool.",
"title": ""
},
{
"docid": "6c8b4fc313f631f7006f5409db8058f5",
"text": "This work proposes a new pixel structure based on amorphous indium-gallium-zinc-oxide thin-film transistors (a-IGZO TFTs) and a parallel addressing scheme for high-resolution active-matrix organic light-emitting diode (AMOLED) displays. The proposed circuit compensates for the nonuniformity of luminance that is caused by shifts in the threshold voltage (<inline-formula><tex-math notation=\"LaTeX\">${{V}}_{{\\rm{TH}}}$</tex-math></inline-formula>) and mobility of driving TFTs. Measurement results verify that the parallel addressing scheme successfully extends the compensation time and accurately detects the <inline-formula><tex-math notation=\"LaTeX\">${{V}}_{{\\rm{TH}}}$</tex-math> </inline-formula> of the driving TFT. Moreover, the proposed circuit reduces the variations of OLED luminance from more than 83% to less than 13% when the <inline-formula><tex-math notation=\"LaTeX\">${{V}}_{{\\rm{TH}}}$ </tex-math></inline-formula> and mobility of driving TFT shifts by 1 V and 30%, respectively, and the <inline-formula><tex-math notation=\"LaTeX\">${{V}}_{{\\rm{TH}}}$</tex-math></inline-formula> of OLED varies from 0 to 0.9 V.",
"title": ""
},
{
"docid": "4560e1b7318013be0688b8e73692fda4",
"text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.",
"title": ""
},
{
"docid": "75398378e62d40c05228cf8333ff6b0a",
"text": "This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods. Authors’ address: School of Computer Science, McGill University, 3480 University Street, Montreal, Canada H3A 2K6. The authors’ research was sponsored by NSERC Grant A3456 and FCAR Grant 90-ER-0291. 1. The main paradigms The purpose of this chapter is to review the main methods for generating random variables, vectors and processes. Classical workhorses such as the inversion method, the rejection method and table methods are reviewed in section 1. In section 2, we discuss the expected time complexity of various algorithms, and give a few examples of the design of generators that are uniformly fast over entire families of distributions. In section 3, we develop a few universal generators, such as generators for all log concave distributions on the real line. Section 4 deals with random variate generation when distributions are indirectly specified, e.g, via Fourier coefficients, characteristic functions, the moments, the moment generating function, distributional identities, infinite series or Kolmogorov measures. Random processes are briefly touched upon in section 5. Finally, the latest developments in Markov chain methods are discussed in section 6. Some of this work grew from Devroye (1986a), and we are carefully documenting work that was done since 1986. More recent references can be found in the book by Hörmann, Leydold and Derflinger (2004). Non-uniform random variate generation is concerned with the generation of random variables with certain distributions. Such random variables are often discrete, taking values in a countable set, or absolutely continuous, and thus described by a density. The methods used for generating them depend upon the computational model one is working with, and upon the demands on the part of the output. For example, in a ram (random access memory) model, one accepts that real numbers can be stored and operated upon (compared, added, multiplied, and so forth) in one time unit. Furthermore, this model assumes that a source capable of producing an i.i.d. (independent identically distributed) sequence of uniform [0, 1] random variables is available. This model is of course unrealistic, but designing random variate generators based on it has several advantages: first of all, it allows one to disconnect the theory of non-uniform random variate generation from that of uniform random variate generation, and secondly, it permits one to plan for the future, as more powerful computers will be developed that permit ever better approximations of the model. Algorithms designed under finite approximation limitations will have to be redesigned when the next generation of computers arrives. For the generation of discrete or integer-valued random variables, which includes the vast area of the generation of random combinatorial structures, one can adhere to a clean model, the pure bit model, in which each bit operation takes one time unit, and storage can be reported in terms of bits. Typically, one now assumes that an i.i.d. sequence of independent perfect bits is available. In this model, an elegant information-theoretic theory can be derived. 
For example, Knuth and Yao (1976) showed that to generate a random integer X described by the probability distribution P{X = n} = p_n, n ≥ 1, any method must use an expected number of bits greater than the binary entropy of the distribution, ∑_n p_n log_2(1/p_n).",
"title": ""
},
{
"docid": "d8e9a4d827be8f75ce601935032d829c",
"text": "UNLABELLED\nThe management of amyelic thoracolumbar burst fractures remains controversial. In this study, we compared the clinical efficacy of percutaneous kyphoplasty (PKP) and short-segment pedicle instrumentation (SSPI). Twenty-three patients were treated with PKP, and 25 patients with SSPI. They all presented with Type A3 amyelic thoracolumbar fractures. Clinical outcomes were evaluated by a Visual Analog Scale (VAS) and Oswestry Disability Index (ODI) preoperatively, postoperatively, and at two years follow-up. Radiographic data including the anterior and posterior vertebral body height, kyphotic angle, as well as spinal canal compromise was also evaluated. The patients in both groups were similar regarding age, bone mineral density (BMD), follow-up period, severity of the deformity and fracture. Blood loss, operation time, and bed-rest time were less in the PKP group. VAS, ODI score improved more rapidly after surgery in the PKP group. No significant difference was found in VAS and ODI scores between the two groups at final follow-up (p > 0.05). Meanwhile, the height of anterior vertebrae (Ha), the height of posterior vertebrae (Hp) and the kyphosis angle showed significant improvement in each group (p < 0.05). The postoperative improvement in spinal canal compromise was not statistically significant in the PKP group (p > 0.05); there was a significant improvement in the SSPI group (p < 0.05). Moreover, these postoperative radiographic assessments showed significant differences between the two groups regarding the improvement of canal compromise (p < 0.05). At final follow-up, remodeling of spinal canal compromise was detected in both groups.\n\n\nCONCLUSION\nBoth PKP and SSPI appeared as effective and reliable operative techniques for selected amyelic thoracolumbar fractures in the short-term. PKP had a significantly smaller blood loss and shorter bed-rest time, but SSPI provided a better reduction. Long-time studies should be conducted to support these clinical outcomes.",
"title": ""
},
{
"docid": "139aec8f3f1053631232defba7984305",
"text": "Users of a social network like to follow the posts published by influential users. Such posts usually are delivered quickly and thus will produce a strong influence on public opinions. In this paper, we focus on the problem of identifying domaindependent influential users(or topic experts). Some of traditional approaches are based on the post contents of users users to identify influential users, which may be biased by spammers who try to make posts related to some topics through a simple copy and paste. Others make use of user authentication information given by a service platform or user self description (introduction or label) in finding influential users. However, what users have published is not necessarily related to what they have registed and described. In addition, if there is no comments from other users, its less objective to assess a users post quality. To improve effectiveness of recognizing influential users in a topic of microblogs, we propose a post-feature based approach which is supplementary to postcontent based approaches. Our experimental results show that the post-feature based approach produces relatively higher precision than that of the content based approach.",
"title": ""
},
{
"docid": "5b0842894cbf994c3e63e521f7352241",
"text": "The burgeoning field of genomics has revived interest in multiple testing procedures by raising new methodological and computational challenges. For example, microarray experiments generate large multiplicity problems in which thousands of hypotheses are tested simultaneously. Westfall and Young (1993) propose resampling-based p-value adjustment procedures which are highly relevant to microarray experiments. This article discusses different criteria for error control in resampling-based multiple testing, including (a) the family wise error rate of Westfall and Young (1993) and (b) the false discovery rate developed by Benjamini and Hochberg (1995), both from a frequentist viewpoint; and (c) the positive false discovery rate of Storey (2002a), which has a Bayesian motivation. We also introduce our recently developed fast algorithm for implementing the minP adjustment to control family-wise error rate. Adjusted p-values for different approaches are applied to gene expression data from two recently published microarray studies. The properties of these procedures for multiple testing are compared.",
"title": ""
},
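To make the false discovery rate criterion above concrete, here is a minimal sketch of the Benjamini–Hochberg step-up adjustment in Python. It is a generic textbook implementation under the stated assumptions, not the resampling-based minP algorithm introduced in the passage, and the example p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH procedure: returns adjusted p-values and a rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                      # ascending p-values
    ranked = p[order]
    # BH adjusted p-values: p_(i) * m / i, made monotone from the largest rank down
    adj = ranked * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]
    adj = np.clip(adj, 0, 1)
    adjusted = np.empty(m)
    adjusted[order] = adj
    return adjusted, adjusted <= alpha

# Example: ten hypothetical p-values from a microarray-style screen
adj_p, reject = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042,
                                    0.06, 0.074, 0.205, 0.212, 0.216])
print(adj_p, reject)
```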
{
"docid": "885542ef60e8c2dbcfe73d7158244f82",
"text": "Three decades of active research on the teaching of introductory programming has had limited effect on classroom practice. Although relevant research exists across several disciplines including education and cognitive science, disciplinary differences have made this material inaccessible to many computing educators. Furthermore, computer science instructors have not had access to a comprehensive survey of research in this area. This paper collects and classifies this literature, identifies important work and mediates it to computing educators and professional bodies.\n We identify research that gives well-supported advice to computing academics teaching introductory programming. Limitations and areas of incomplete coverage of existing research efforts are also identified. The analysis applies publication and research quality metrics developed by a previous ITiCSE working group [74].",
"title": ""
},
{
"docid": "e4bccc7e1da310439b44a533a3ed232b",
"text": "The long-term advancement (LTE) is the new mobile communication system, built after a redesigned physical part and predicated on an orthogonal regularity division multiple gain access to (OFDMA) modulation, features solid performance in challenging multipath surroundings and substantially boosts the performance of the cellular channel in conditions of pieces per second per Hertz (bps/Hz). Nevertheless, as all cordless systems, LTE is susceptible to radio jamming episodes. Such dangers have security implications especially regarding next-generation disaster response communication systems predicated on LTE technology. This proof concept paper overviews some new effective attacks (smart jamming) that extend the number and effectiveness of basic radio jamming. Predicated on these new hazards, some new potential security research guidelines are introduced, looking to improve the resiliency of LTE systems against such problems. A spread-spectrum modulation of the key downlink broadcast stations is coupled with a scrambling of the air tool allocation of the uplink control stations and a sophisticated system information subject matter encryption scheme.",
"title": ""
},
{
"docid": "8da939b67039eddb24db213337a65958",
"text": "Alistair S. Jump* and Josep Peñuelas Unitat d’Ecofisiologia CSICCEAB-CREAF, Centre de Recerca Ecològica i Aplicacions Forestals, Universitat Autònoma de Barcelona, E-08193, Bellaterra, Barcelona, Spain *Correspondence: E-mail: a.s.jump@creaf.uab.es Abstract Climate is a potent selective force in natural populations, yet the importance of adaptation in the response of plant species to past climate change has been questioned. As many species are unlikely to migrate fast enough to track the rapidly changing climate of the future, adaptation must play an increasingly important role in their response. In this paper we review recent work that has documented climate-related genetic diversity within populations or on the microgeographical scale. We then describe studies that have looked at the potential evolutionary responses of plant populations to future climate change. We argue that in fragmented landscapes, rapid climate change has the potential to overwhelm the capacity for adaptation in many plant populations and dramatically alter their genetic composition. The consequences are likely to include unpredictable changes in the presence and abundance of species within communities and a reduction in their ability to resist and recover from further environmental perturbations, such as pest and disease outbreaks and extreme climatic events. Overall, a range-wide increase in extinction risk is likely to result. We call for further research into understanding the causes and consequences of the maintenance and loss of climate-related genetic diversity within populations.",
"title": ""
},
{
"docid": "61038d16483587c5025ef7bcaf7e6bd1",
"text": "BACKGROUND\nMany prior studies have evaluated shoulder motion, yet no three-dimensional analysis comparing the combined clavicular, scapular, and humeral motion during arm elevation has been done. We aimed to describe and compare dynamic three-dimensional motion of the shoulder complex during raising and lowering the arm across three distinct elevation planes (flexion, scapular plane abduction, and coronal plane abduction).\n\n\nMETHODS\nTwelve subjects without a shoulder abnormality were enrolled. Transcortical pin placement into the clavicle, scapula, and humerus allowed electromagnetic motion sensors to be rigidly fixed. The subjects completed two repetitions of raising and lowering the arm in flexion, scapular, and abduction planes. Three-dimensional angles were calculated for sternoclavicular, acromioclavicular, scapulothoracic, and glenohumeral joint motions. Joint angles between humeral elevation planes and between raising and lowering of the arm were compared.\n\n\nRESULTS\nGeneral patterns of shoulder motion observed during humeral elevation were clavicular elevation, retraction, and posterior axial rotation; scapular internal rotation, upward rotation, and posterior tilting relative to the clavicle; and glenohumeral elevation and external rotation. Clavicular posterior rotation predominated at the sternoclavicular joint (average, 31 degrees). Scapular posterior tilting predominated at the acromioclavicular joint (average, 19 degrees). Differences between flexion and abduction planes of humerothoracic elevation were largest for the glenohumeral joint plane of elevation (average, 46 degrees).\n\n\nCONCLUSIONS\nOverall shoulder motion consists of substantial angular rotations at each of the four shoulder joints, enabling the multiple-joint interaction required to elevate the arm overhead.",
"title": ""
},
{
"docid": "a1306f761e45fdd56ae91d1b48909d74",
"text": "We propose a graphical model for representing networks of stochastic processes, the minimal generative model graph. It is based on reduced factorizations of the joint distribution over time. We show that under appropriate conditions, it is unique and consistent with another type of graphical model, the directed information graph, which is based on a generalization of Granger causality. We demonstrate how directed information quantifies Granger causality in a particular sequential prediction setting. We also develop efficient methods to estimate the topological structure from data that obviate estimating the joint statistics. One algorithm assumes upper bounds on the degrees and uses the minimal dimension statistics necessary. In the event that the upper bounds are not valid, the resulting graph is nonetheless an optimal approximation in terms of Kullback-Leibler (KL) divergence. Another algorithm uses near-minimal dimension statistics when no bounds are known, but the distribution satisfies a certain criterion. Analogous to how structure learning algorithms for undirected graphical models use mutual information estimates, these algorithms use directed information estimates. We characterize the sample-complexity of two plug-in directed information estimators and obtain confidence intervals. For the setting when point estimates are unreliable, we propose an algorithm that uses confidence intervals to identify the best approximation that is robust to estimation error. Last, we demonstrate the effectiveness of the proposed algorithms through the analysis of both synthetic data and real data from the Twitter network. In the latter case, we identify which news sources influence users in the network by merely analyzing tweet times.",
"title": ""
},
{
"docid": "281b0a108c1e8507f26381cc905ce9d1",
"text": "Extraction–Transform–Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.",
"title": ""
},
{
"docid": "25f0871346c370db4b26aecd08a9d75e",
"text": "This review presents a comprehensive discussion of the key technical issues in woody biomass pretreatment: barriers to efficient cellulose saccharification, pretreatment energy consumption, in particular energy consumed for wood-size reduction, and criteria to evaluate the performance of a pretreatment. A post-chemical pretreatment size-reduction approach is proposed to significantly reduce mechanical energy consumption. Because the ultimate goal of biofuel production is net energy output, a concept of pretreatment energy efficiency (kg/MJ) based on the total sugar recovery (kg/kg wood) divided by the energy consumption in pretreatment (MJ/kg wood) is defined. It is then used to evaluate the performances of three of the most promising pretreatment technologies: steam explosion, organosolv, and sulfite pretreatment to overcome lignocelluloses recalcitrance (SPORL) for softwood pretreatment. The present study found that SPORL is the most efficient process and produced highest sugar yield. Other important issues, such as the effects of lignin on substrate saccharification and the effects of pretreatment on high-value lignin utilization in woody biomass pretreatment, are also discussed.",
"title": ""
},
{
"docid": "f33e96f81e63510f0a5e34609a390c2d",
"text": "Authentication based on passwords is used largely in applications for computer security and privacy. However, human actions such as choosing bad passwords and inputting passwords in an insecure way are regarded as “the weakest link” in the authentication chain. Rather than arbitrary alphanumeric strings, users tend to choose passwords either short or meaningful for easy memorization. With web applications and mobile apps piling up, people can access these applications anytime and anywhere with various devices. This evolution brings great convenience but also increases the probability of exposing passwords to shoulder surfing attacks. Attackers can observe directly or use external recording devices to collect users’ credentials. To overcome this problem, we proposed a novel authentication system PassMatrix, based on graphical passwords to resist shoulder surfing attacks. With a one-time valid login indicator and circulative horizontal and vertical bars covering the entire scope of pass-images, PassMatrix offers no hint for attackers to figure out or narrow down the password even they conduct multiple camera-based attacks. We also implemented a PassMatrix prototype on Android and carried out real user experiments to evaluate its memorability and usability. From the experimental result, the proposed system achieves better resistance to shoulder surfing attacks while maintaining usability.",
"title": ""
},
{
"docid": "116e77b20db84c72364723a4c22cdb0a",
"text": "While many organizations turn to human computation labor markets for jobs with black-or-white solutions, there is vast potential in asking these workers for original thought and innovation.",
"title": ""
},
{
"docid": "5031c9b3dfbe2bf2a07a4f1414f594e0",
"text": "BACKGROUND\nWe assessed the effects of a three-year national-level, ministry-led health information system (HIS) data quality intervention and identified associated health facility factors.\n\n\nMETHODS\nMonthly summary HIS data concordance between a gold standard data quality audit and routine HIS data was assessed in 26 health facilities in Sofala Province, Mozambique across four indicators (outpatient consults, institutional births, first antenatal care visits, and third dose of diphtheria, pertussis, and tetanus vaccination) and five levels of health system data aggregation (daily facility paper registers, monthly paper facility reports, monthly paper district reports, monthly electronic district reports, and monthly electronic provincial reports) through retrospective yearly audits conducted July-August 2010-2013. We used mixed-effects linear models to quantify changes in data quality over time and associated health system determinants.\n\n\nRESULTS\nMedian concordance increased from 56.3% during the baseline period (2009-2010) to 87.5% during 2012-2013. Concordance improved by 1.0% (confidence interval [CI]: 0.60, 1.5) per month during the intervention period of 2010-2011 and 1.6% (CI: 0.89, 2.2) per month from 2011-2012. No significant improvements were observed from 2009-2010 (during baseline period) or 2012-2013. Facilities with more technical staff (aβ: 0.71; CI: 0.14, 1.3), more first antenatal care visits (aβ: 3.3; CI: 0.43, 6.2), and fewer clinic beds (aβ: -0.94; CI: -1.7, -0.20) showed more improvements. Compared to facilities with no stock-outs, facilities with five essential drugs stocked out had 51.7% (CI: -64.8 -38.6) lower data concordance.\n\n\nCONCLUSIONS\nA data quality intervention was associated with significant improvements in health information system data concordance across public-sector health facilities in rural and urban Mozambique. Concordance was higher at those facilities with more human resources for health and was associated with fewer clinic-level stock-outs of essential medicines. Increased investments should be made in data audit and feedback activities alongside targeted efforts to improve HIS data in low- and middle-income countries.",
"title": ""
}
] |
scidocsrr
|
b63171fc886e42c3811a5e8f25c9bd51
|
A New Chatbot for Customer Service on Social Media
|
[
{
"docid": "3f5b90fae38890515d312ed3753509ce",
"text": "Brand personality has been shown to affect a variety of user behaviors such as individual preferences and social interactions. Despite intensive research efforts in human personality assessment, little is known about brand personality and its relationship with social media. Leveraging the theory in marketing, we analyze how brand personality associates with its contributing factors embodied in social media. Based on the analysis of over 10K survey responses and a large corpus of social media data from 219 brands, we quantify the relative importance of factors driving brand personality. The brand personality model developed with social media data achieves predicted R values as high as 0.67. We conclude by illustrating how modeling brand personality can help users find brands suiting their personal characteristics and help companies manage brand perceptions.",
"title": ""
}
] |
[
{
"docid": "2f2be97ad06ded172333c29b32fd3f0d",
"text": "Measurement uncertainty is traditionally represented in the form of expanded uncertainty as defined through the Guide to the Expression of Uncertainty in Measurement (GUM). The International Organization for Standardization GUM represents uncertainty through confidence intervals based on the variances and means derived from probability density functions. A new approach to the evaluation of measurement uncertainty based on the polynomial chaos theory is presented and compared with the traditional GUM method",
"title": ""
},
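For context, the GUM approach the passage compares against combines standard uncertainties through a first-order propagation law and reports an expanded uncertainty via a coverage factor. The expression below is the standard formula for uncorrelated inputs, not material taken from the cited paper.

```latex
% GUM combined standard uncertainty for y = f(x_1, ..., x_N) with
% uncorrelated inputs; expanded uncertainty uses a coverage factor k
% (k = 2 gives roughly 95% coverage under a normal assumption).
\[
  u_c^2(y) \;=\; \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} u^2(x_i),
  \qquad U \;=\; k \, u_c(y).
\]
```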
{
"docid": "e5153a6faa79a9edaf17ad5acae7701a",
"text": "The Backtracking Spiral Algorithm (BSA) is a coverage strategy for mobile robots based on the use of spiral filling paths; in order to assure the completeness, unvisited regions are marked and covered by backtracking mechanism. The BSA basic algorithm is designed to work in an environment modeled by a coarse-grain grid. BSA has been extended to cover, not only the free cells, but also the partially occupied ones. In this paper, the concepts and algorithms used to extend BSA are introduced. The ideas used to extend BSA are generic, thus a similar approach can be used to extend most of the grid-based coverage algorithms. Finally, some simulation results that demonstrate that BSA performs a complete coverage are presented.",
"title": ""
},
{
"docid": "c1a4921eb85dc51e690c10649a582bf1",
"text": "System thinking skills are a prerequisite for acting successfully and responsibly in a complex world. However, traditional education largely fails to enhance system thinking skills whereas learner-centered educational methods seem more promising. Several such educational methods are compared with respect to their suitability for improving system thinking. It is proposed that integrated learning environments consisting of system dynamics models and additional didactical material have positive learning effects.This is exemplified by the illustration and validation of two learning sequences.",
"title": ""
},
{
"docid": "4eec5be6b29425e025f9e1b23b742639",
"text": "There is increasing interest in sharing the experience of products and services on the web platform, and social media has opened a way for product and service providers to understand their consumers needs and expectations. This paper explores reviews by cloud consumers that reflect consumers experiences with cloud services. The reviews of around 6,000 cloud service users were analysed using sentiment analysis to identify the attitude of each review, and to determine whether the opinion expressed was positive, negative, or neutral. The analysis used two data mining tools, KNIME and RapidMiner, and the results were compared. We developed four prediction models in this study to predict the sentiment of users reviews. The proposed model is based on four supervised machine learning algorithms: K-Nearest Neighbour (k-NN), Nave Bayes, Random Tree, and Random Forest. The results show that the Random Forest predictions achieve 97.06% accuracy, which makes this model a better prediction model than the other three.",
"title": ""
},
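As a rough illustration of the kind of supervised sentiment pipeline described in the preceding passage, the sketch below combines TF-IDF features with a Random Forest in scikit-learn. The in-line reviews and labels are hypothetical placeholders, and the original study used KNIME and RapidMiner rather than this code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical cloud-service reviews and sentiment labels (placeholders).
reviews = ["great uptime and support", "constant outages, very slow",
           "pricing is fair and transparent", "lost my data, unacceptable"]
labels = ["positive", "negative", "positive", "negative"]

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.5, random_state=0)

# TF-IDF unigrams/bigrams feeding a Random Forest classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```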
{
"docid": "cfc03176bd3417a5c0633aeb9c0ffed1",
"text": "Rho GTPases are key regulators of cytoskeletal dynamics and affect many cellular processes, including cell polarity, migration, vesicle trafficking and cytokinesis. These proteins are conserved from plants and yeast to mammals, and function by interacting with and stimulating various downstream targets, including actin nucleators, protein kinases and phospholipases. The roles of Rho GTPases have been extensively studied in different mammalian cell types using mainly dominant negative and constitutively active mutants. The recent availability of knockout mice for several members of the Rho family reveals new information about their roles in signalling to the cytoskeleton and in development.",
"title": ""
},
{
"docid": "fcc021f052f261c27cb67205692cd9ab",
"text": "Various studies showed that inhaled fine particles with diameter less than 10 micrometers (PM10) in the air can cause adverse health effects on human, such as heart disease, asthma, stroke, bronchitis and the like. This is due to their ability to penetrate further into the lung and alveoli. The aim of this study is to develop a state-of-art reliable technique to use surveillance camera for monitoring the temporal patterns of PM10 concentration in the air. Once the air quality reaches the alert thresholds, it will provide warning alarm to alert human to prevent from long exposure to these fine particles. This is important for human to avoid the above mentioned adverse health effects. In this study, an internet protocol (IP) network camera was used as an air quality monitoring sensor. It is a 0.3 mega pixel charge-couple-device (CCD) camera integrates with the associate electronics for digitization and compression of images. This network camera was installed on the rooftop of the school of physics. The camera observed a nearby hill, which was used as a reference target. At the same time, this network camera was connected to network via a cat 5 cable or wireless to the router and modem, which allowed image data transfer over the standard computer networks (Ethernet networks), internet, or even wireless technology. Then images were stored in a server, which could be accessed locally or remotely for computing the air quality information with a newly developed algorithm. The results were compared with the alert thresholds. If the air quality reaches the alert threshold, alarm will be triggered to inform us this situation. The newly developed algorithm was based on the relationship between the atmospheric reflectance and the corresponding measured air quality of PM10 concentration. In situ PM10 air quality values were measured with DustTrak meter and the sun radiation was measured simultaneously with a spectroradiometer. Regression method was use to calibrate this algorithm. Still images captured by this camera were separated into three bands namely red, green and blue (RGB), and then digital numbers (DN) were determined. These DN were used to determine the atmospherics reflectance values of difference bands, and then used these values in the newly developed algorithm to determine PM10 concentration. The results of this study showed that the proposed algorithm produced a high correlation coefficient (R2) of 0.7567 and low root-mean-square error (RMS) of plusmn 5 mu g/m3 between the measured and estimated PM10 concentration. A program was written by using microsoft visual basic 6.0 to download the still images automatically from the camera via the internet and utilize the newly developed algorithm to determine PM10 concentration automatically and continuously. This concluded that surveillance camera can be used for temporal PM10 concentration monitoring. It is more than an air pollution monitoring device; it provides continuous, on-line, real-time monitoring for air pollution at multi location and air pollution warning or alert system. This system also offers low implementation, operation and maintenance cost of ownership because the surveillance cameras become cheaper and cheaper now.",
"title": ""
},
{
"docid": "78d33d767f9eb15ef79a6d016ffcfb3a",
"text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1",
"title": ""
},
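The Kalman-filter workload prediction mentioned in the preceding passage can be illustrated with a scalar filter that tracks a noisy arrival rate under a random-walk state model. This is a minimal sketch with assumed noise parameters (q, r) and made-up arrival rates, not the paper's implementation.

```python
def kalman_rate_tracker(observed_rates, q=1e-2, r=1.0):
    """Scalar Kalman filter with random-walk dynamics x_t = x_{t-1} + w_t."""
    x, p = observed_rates[0], 1.0      # initial state estimate and variance
    estimates = []
    for z in observed_rates:
        p = p + q                      # predict: variance grows by process noise q
        k = p / (p + r)                # Kalman gain given measurement noise r
        x = x + k * (z - x)            # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical per-minute tuple arrival rates from a streaming source
print(kalman_rate_tracker([120, 135, 128, 190, 205, 198, 180]))
```

The filtered estimate could then drive the number of Map and Reduce task daemons to provision, which is the role workload prediction plays in the framework described above.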
{
"docid": "5a392f4c9779c06f700e2ff004197de9",
"text": "Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classiier learning systems. Both form a set of classiiers that are combined by v oting, bagging by generating replicated boot-strap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater beneet. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classiiers reduces this downside and also leads to slightly better results on most of the datasets considered.",
"title": ""
},
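The bagging-versus-boosting comparison summarized above can be reproduced in miniature with off-the-shelf ensemble implementations. The sketch below uses scikit-learn on a synthetic dataset and is only illustrative of the general setup; the original experiments used a C4.5-style decision tree learner on a curated collection of datasets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the benchmark datasets used in the paper
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
base = DecisionTreeClassifier(max_depth=3)   # weak tree learner

ensembles = [
    ("bagging", BaggingClassifier(base, n_estimators=50, random_state=0)),
    ("boosting", AdaBoostClassifier(base, n_estimators=50, random_state=0)),
]
for name, clf in ensembles:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```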
{
"docid": "54ee7c0a265e9c84d0bd31f79fc07923",
"text": "PURPOSE\nTo develop a guideline for the use of sentinel node biopsy (SNB) in early stage breast cancer.\n\n\nMETHODS\nAn American Society of Clinical Oncology (ASCO) Expert Panel conducted a systematic review of the literature available through February 2004 on the use of SNB in early-stage breast cancer. The panel developed a guideline for clinicians and patients regarding the appropriate use of a sentinel lymph node identification and sampling procedure from hereon referred to as SNB. The guideline was reviewed by selected experts in the field and the ASCO Health Services Committee and was approved by the ASCO Board of Directors.\n\n\nRESULTS\nThe literature review identified one published prospective randomized controlled trial in which SNB was compared with axillary lymph node dissection (ALND), four limited meta-analyses, and 69 published single-institution and multicenter trials in which the test performance of SNB was evaluated with respect to the results of ALND (completion axillary dissection). There are currently no data on the effect of SLN biopsy on long-term survival of patients with breast cancer. However, a review of the available evidence demonstrates that, when performed by experienced clinicians, SNB appears to be a safe and acceptably accurate method for identifying early-stage breast cancer without involvement of the axillary lymph nodes.\n\n\nCONCLUSION\nSNB is an appropriate initial alternative to routine staging ALND for patients with early-stage breast cancer with clinically negative axillary nodes. Completion ALND remains standard treatment for patients with axillary metastases identified on SNB. Appropriately identified patients with negative results of SNB, when done under the direction of an experienced surgeon, need not have completion ALND. Isolated cancer cells detected by pathologic examination of the SLN with use of specialized techniques are currently of unknown clinical significance. Although such specialized techniques are often used, they are not a required part of SLN evaluation for breast cancer at this time. Data suggest that SNB is associated with less morbidity than ALND, but the comparative effects of these two approaches on tumor recurrence or patient survival are unknown.",
"title": ""
},
{
"docid": "93ec9adabca7fac208a68d277040c254",
"text": "UNLABELLED\nWe developed cyNeo4j, a Cytoscape App to link Cytoscape and Neo4j databases to utilize the performance and storage capacities Neo4j offers. We implemented a Neo4j NetworkAnalyzer, ForceAtlas2 layout and Cypher component to demonstrate the possibilities a distributed setup of Cytoscape and Neo4j have.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe app is available from the Cytoscape App Store at http://apps.cytoscape.org/apps/cyneo4j, the Neo4j plugins at www.github.com/gsummer/cyneo4j-parent and the community and commercial editions of Neo4j can be found at http://www.neo4j.com.\n\n\nCONTACT\ngeorg.summer@gmail.com.",
"title": ""
},
{
"docid": "930b48ac25cb646322406c98bf0ae383",
"text": "The core technology of Bitcoin, the blockchain, has recently emerged as a disruptive innovation with a wide range of applications, potentially able to redesign our interactions in business, politics and society at large. Although scholarly interest in this subject is growing, a comprehensive analysis of blockchain applications from a political perspective is severely lacking to date. This paper aims to fill this gap and it discusses the key points of blockchain-based decentralized governance, which challenges to varying degrees the traditional mechanisms of State authority, citizenship and democracy. In particular, the paper verifies to which extent blockchain and decentralization platforms can be considered as hyper-political tools, capable to manage social interactions on large scale and dismiss traditional central authorities. The analysis highlights risks related to a dominant position of private powers in distributed ecosystems, which may lead to a general disempowerment of citizens and to the emergence of a stateless global society. While technological utopians urge the demise of any centralized institution, this paper advocates the role of the State as a necessary central point of coordination in society, showing that decentralization through algorithm-based consensus is an organizational theory, not a stand-alone political theory.",
"title": ""
},
{
"docid": "4e2e8bc4566ccd9718593ea460593b7d",
"text": "We present a method for simulating makeup in a face image. To generate realistic results without detailed geometric and reflectance measurements of the user, we propose to separate the image into intrinsic image layers and alter them according to proposed adaptations of physically-based reflectance models. Through this layer manipulation, the measured properties of cosmetic products are applied while preserving the appearance characteristics and lighting conditions of the target face. This approach is demonstrated on various forms of cosmetics including foundation, blush, lipstick, and eye shadow. Experimental results exhibit a close approximation to ground truth images, without artifacts such as transferred personal features and lighting effects that degrade the results of image-based makeup transfer methods.",
"title": ""
},
{
"docid": "9ff522e9874c924636f9daba90f9881a",
"text": "Time management is required in simulations to ensure temporal aspects of the system under investigation are correctly reproduced by the simulation model. This paper describes the time management services that have been defined in the High Level Architecture. The need for time management services is discussed, as well as design rationales that lead to the current definition of the HLA time management services. These services are described, highlighting information that must flow between federates and the Runtime Infrastructure (RTI) software in order to efficiently implement time management algorithms.",
"title": ""
},
{
"docid": "63737a22e2591a91884496ea7a1185b1",
"text": "Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPrime, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPrime benchmarks. They retain the original applications' performance characteristics, in particular their relative performance across platforms. Also, the result benchmarks, already released online, are much more compact and easy-to-port compared to the original applications.",
"title": ""
},
{
"docid": "cf2487747479351525f066fdcb35cc66",
"text": "A FUNDAMENTAL CHALLENGE FOR SYSTEMS NEUROSCIENCE IS TO QUANTITATIVELY RELATE ITS THREE MAJOR BRANCHES OF RESEARCH: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondency between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondency problems complicate relating activity patterns between different modalities of brain-activity measurement (e.g., fMRI and invasive or scalp electrophysiology), and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. Building on a rich psychological and mathematical literature on similarity analysis, we propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects as measured with fMRI in early visual cortex and the fusiform face area to computational models spanning a wide range of complexities. The RDMs are simultaneously related via second-level application of multidimensional scaling and tested using randomization and bootstrap techniques. We discuss the broad potential of RSA, including novel approaches to experimental design, and argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.",
"title": ""
},
{
"docid": "e5fa2011c64c3e1f7e9d97f545579d2b",
"text": "Remote health monitoring (RHM) can help save the cost burden of unhealthy lifestyles. Of increased popularity is the use of smartphones to collect data, measure physical activity, and provide coaching and feedback to users. One challenge with this method is to improve adherence to prescribed medical regimens. In this paper we present a new battery optimization method that increases the battery lifetime of smartphones which monitor physical activity. We designed a system, WANDA-CVD, to test our battery optimization method. The focus of this report describes our in-lab pilot study and a study aimed at reducing cardiovascular disease (CVD) in young women, the Women's Heart Health study. Conclusively, our battery optimization technique improved battery lifetime by 300%. This method also increased participant adherence to the remote health monitoring system in the Women's Heart Health study by 53%.",
"title": ""
},
{
"docid": "b5d7394bf1f47551263acef5142af975",
"text": "Available online 7 August 2014",
"title": ""
},
{
"docid": "28c19bf17c76a6517b5a7834216cd44d",
"text": "The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.",
"title": ""
},
{
"docid": "2f9d88a7848fc5954b3f9459d6b6dc59",
"text": "OBJECTIVE\nTo test the feasibility of creating a valid and reliable checklist with the following features: appropriate for assessing both randomised and non-randomised studies; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting, internal validity (bias and confounding) and power, but also for external validity.\n\n\nDESIGN\nA pilot version was first developed, based on epidemiological principles, reviews, and existing checklists for randomised studies. Face and content validity were assessed by three experienced reviewers and reliability was determined using two raters assessing 10 randomised and 10 non-randomised studies. Using different raters, the checklist was revised and tested for internal consistency (Kuder-Richardson 20), test-retest and inter-rater reliability (Spearman correlation coefficient and sign rank test; kappa statistics), criterion validity, and respondent burden.\n\n\nMAIN RESULTS\nThe performance of the checklist improved considerably after revision of a pilot version. The Quality Index had high internal consistency (KR-20: 0.89) as did the subscales apart from external validity (KR-20: 0.54). Test-retest (r 0.88) and inter-rater (r 0.75) reliability of the Quality Index were good. Reliability of the subscales varied from good (bias) to poor (external validity). The Quality Index correlated highly with an existing, established instrument for assessing randomised studies (r 0.90). There was little difference between its performance with non-randomised and with randomised studies. Raters took about 20 minutes to assess each paper (range 10 to 45 minutes).\n\n\nCONCLUSIONS\nThis study has shown that it is feasible to develop a checklist that can be used to assess the methodological quality not only of randomised controlled trials but also non-randomised studies. It has also shown that it is possible to produce a checklist that provides a profile of the paper, alerting reviewers to its particular methodological strengths and weaknesses. Further work is required to improve the checklist and the training of raters in the assessment of external validity.",
"title": ""
}
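For reference, the Kuder–Richardson 20 statistic reported above for internal consistency can be computed directly from dichotomous item scores. The sketch below uses a made-up rater-by-item matrix; it is a generic implementation of the standard formula, not data or code from the study.

```python
import numpy as np

def kr20(items):
    """KR-20 internal consistency for a subjects-by-items matrix of 0/1 scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of checklist items
    p = items.mean(axis=0)                      # proportion scoring 1 per item
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical 0/1 checklist scores for 5 papers on 6 items
scores = [[1, 1, 0, 1, 1, 0],
          [1, 0, 0, 1, 0, 0],
          [1, 1, 1, 1, 1, 1],
          [0, 0, 0, 1, 0, 0],
          [1, 1, 1, 1, 1, 0]]
print(kr20(scores))
```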
] |
scidocsrr
|
f2a7c6baa682bc8ae5b929097c53644c
|
Face Recognition: A Novel Multi-Level Taxonomy based Survey
|
[
{
"docid": "a4c8e2938b976a37f38efc1ce5bc6286",
"text": "As a classic statistical model of 3D facial shape and texture, 3D Morphable Model (3DMM) is widely used in facial analysis, e.g., model fitting, image synthesis. Conventional 3DMM is learned from a set of well-controlled 2D face images with associated 3D face scans, and represented by two sets of PCA basis functions. Due to the type and amount of training data, as well as the linear bases, the representation power of 3DMM can be limited. To address these problems, this paper proposes an innovative framework to learn a nonlinear 3DMM model from a large set of unconstrained face images, without collecting 3D face scans. Specifically, given a face image as input, a network encoder estimates the projection, shape and texture parameters. Two decoders serve as the nonlinear 3DMM to map from the shape and texture parameters to the 3D shape and texture, respectively. With the projection parameter, 3D shape, and texture, a novel analytically-differentiable rendering layer is designed to reconstruct the original input face. The entire network is end-to-end trainable with only weak supervision. We demonstrate the superior representation power of our nonlinear 3DMM over its linear counterpart, and its contribution to face alignment and 3D reconstruction.",
"title": ""
}
] |
[
{
"docid": "22c9f931198f054e7994e7f1db89a194",
"text": "Learning a good distance metric plays a vital role in many multimedia retrieval and data mining tasks. For example, a typical content-based image retrieval (CBIR) system often relies on an effective distance metric to measure similarity between any two images. Conventional CBIR systems simply adopting Euclidean distance metric often fail to return satisfactory results mainly due to the well-known semantic gap challenge. In this article, we present a novel framework of Semi-Supervised Distance Metric Learning for learning effective distance metrics by exploring the historical relevance feedback log data of a CBIR system and utilizing unlabeled data when log data are limited and noisy. We formally formulate the learning problem into a convex optimization task and then present a new technique, named as “Laplacian Regularized Metric Learning” (LRML). Two efficient algorithms are then proposed to solve the LRML task. Further, we apply the proposed technique to two applications. One direct application is for Collaborative Image Retrieval (CIR), which aims to explore the CBIR log data for improving the retrieval performance of CBIR systems. The other application is for Collaborative Image Clustering (CIC), which aims to explore the CBIR log data for enhancing the clustering performance of image pattern clustering tasks. We conduct extensive evaluation to compare the proposed LRML method with a number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information. Encouraging results validate the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "065a63832bb4fe73fd0f44f16a09af6b",
"text": "The everyday auditory environment consists of multiple simultaneously active sources with overlapping temporal and spectral acoustic properties. Despite the seemingly chaotic composite signal impinging on our ears, the resulting perception is of an orderly \"auditory scene\" that is organized according to sources and auditory events, allowing us to select messages easily, recognize familiar sound patterns, and distinguish deviant or novel ones. Recent data suggest that these perceptual achievements are mainly based on processes of a cognitive nature (\"sensory intelligence\") in the auditory cortex. Even higher cognitive processes than previously thought, such as those that organize the auditory input, extract the common invariant patterns shared by a number of acoustically varying sounds, or anticipate the auditory events of the immediate future, occur at the level of sensory cortex (even when attention is not directed towards the sensory input).",
"title": ""
},
{
"docid": "9c7de005e64ba67981dd7d603b80ee35",
"text": "Streptococcus mitis (S. mitis) and Pseudomonas aeruginosa (P. aeruginosa) are typically found in the upper respiratory tract of infants. We previously found that P. aeruginosa and S. mitis were two of the most common bacteria in biofilms on newborns' endotracheal tubes (ETTs) and in their sputa and that S. mitis was able to produce autoinducer-2 (AI-2), whereas P. aeruginosa was not. Recently, we also found that exogenous AI-2 and S. mitis could influence the behaviors of P. aeruginosa. We hypothesized that S. mitis contributes to this interspecies interaction and that inhibition of AI-2 could result in inhibition of these effects. To test this hypothesis, we selected PAO1 as a representative model strain of P. aeruginosa and evaluated the effect of S. mitis as well as an AI-2 analog (D-ribose) on mono- and co-culture biofilms in both in vitro and in vivo models. In this context, S. mitis promoted PAO1 biofilm formation and pathogenicity. Dual-species (PAO1 and S. mitis) biofilms exhibited higher expression of quorum sensing genes than single-species (PAO1) biofilms did. Additionally, ETTs covered in dual-species biofilms increased the mortality rate and aggravated lung infection compared with ETTs covered in mono-species biofilms in an endotracheal intubation rat model, all of which was inhibited by D-ribose. Our results demonstrated that S. mitis AI-2 plays an important role in interspecies interactions with PAO1 and may be a target for inhibition of biofilm formation and infection in ventilator-associated pneumonia.",
"title": ""
},
{
"docid": "589da022358bee9f14b337db42536067",
"text": "To represent a text as a bag of properly identified “phrases” and use the representation for processing the text is proved to be useful. The key question here is how to identify the phrases and represent them. The traditional method of utilizing n-grams can be regarded as an approximation of the approach. Such a method can suffer from data sparsity, however, particularly when the length of n-gram is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as “region embeddings”. Without loss of generality we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts, the embedding of the word itself, and a weighting matrix to interact with the local context, referred to as local context unit. The region embeddings are learned and used in the classification task, as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture the salient phrasal expressions in the texts.",
"title": ""
},
{
"docid": "b84d17054651cdc64e40b7e025014da2",
"text": "Querying complex graph databases such as knowledge graphs is a challenging task for non-professional users. Due to their complex schemas and variational information descriptions, it becomes very hard for users to formulate a query that can be properly processed by the existing systems. We argue that for a user-friendly graph query engine, it must support various kinds of transformations such as synonym, abbreviation, and ontology. Furthermore, the derived query results must be ranked in a principled manner. In this paper, we introduce a novel framework enabling schemaless and structureless graph querying (SLQ), where a user need not describe queries precisely as required by most databases. The query engine is built on a set of transformation functions that automatically map keywords and linkages from a query to their matches in a graph. It automatically learns an effective ranking model, without assuming manually labeled training examples, and can efficiently return top ranked matches using graph sketch and belief propagation. The architecture of SLQ is elastic for “plug-in” new transformation functions and query logs. Our experimental results show that this new graph querying paradigm is promising: It identifies high-quality matches for both keyword and graph queries over real-life knowledge graphs, and outperforms existing methods significantly in terms of effectiveness and efficiency.",
"title": ""
},
{
"docid": "6527c10c822c2446b7be928f86d3c8f8",
"text": "In this paper we present a novel algorithm for automatic analysis, transcription, and parameter extraction from isolated polyphonic guitar recordings. In addition to general score-related information such as note onset, duration, and pitch, instrumentspecific information such as the plucked string, the applied plucking and expression styles are retrieved automatically. For this purpose, we adapted several state-of-the-art approaches for onset and offset detection, multipitch estimation, string estimation, feature extraction, and multi-class classification. Furthermore we investigated a robust partial tracking algorithm with respect to inharmonicity, an extensive extraction of novel and known audio features as well as the exploitation of instrument-based knowledge in the form of plausability filtering to obtain more reliable prediction. Our system achieved very high accuracy values of 98 % for onset and offset detection as well as multipitch estimation. For the instrument-related parameters, the proposed algorithm also showed very good performance with accuracy values of 82 % for the string number, 93 % for the plucking style, and 83 % for the expression style. Index Terms playing techniques, plucking style, expression style, multiple fundamental frequency estimation, string classification, fretboard position, fingering, electric guitar, inharmonicity coefficient, tablature",
"title": ""
},
{
"docid": "671eb73ad86525cb183e2b8dbfe09947",
"text": "We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent’s experience. Because this loss is highly flexible in its ability to take into account the agent’s history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG’s learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.",
"title": ""
},
{
"docid": "ab85854fab566b49dd07ee9c9a9cf990",
"text": "A traveling-wave circularly-polarized microstrip array antenna is presented in this paper. It uses a circularly polarized dual-feed radiating element. The element is a rectangular patch with two chamfered corners. It is fed by microstrip lines, making it possible for the radiating element and feed lines to be realized and integrated in a single layer. A four-element array is designed, built and tested. Measured performance of the antenna is presented, where a good agreement between the simulated and measured results is obtained and demonstrated.",
"title": ""
},
{
"docid": "e979efa5b29a805afac43368a8ab14fc",
"text": "Class imbalance is one of the challenging problems for machine learning in many real-world applications. Cost-sensitive learning has attracted significant attention in recent years to solve the problem, but it is difficult to determine the precise misclassification costs in practice. There are also other factors that influence the performance of the classification including the input feature subset and the intrinsic parameters of the classifier. This paper presents an effective wrapper framework incorporating the evaluation measure (AUC and G-mean) into the objective function of cost sensitive SVM directly to improve the performance of classification by simultaneously optimizing the best pair of feature subset, intrinsic parameters and misclassification cost parameters. Experimental results on various standard benchmark datasets and real-world data with different ratios of imbalance show that the proposed method is effective in comparison with commonly used sampling techniques.",
"title": ""
},
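A minimal sketch of the simplest form of the wrapper idea described above: searching SVM misclassification-cost (class weight) and kernel parameters to maximize G-mean on an imbalanced dataset with scikit-learn. The full framework in the passage also optimizes the feature subset and supports AUC as an alternative objective, which this sketch omits; the dataset and parameter grid are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import recall_score, make_scorer

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity and specificity for a binary task."""
    sens = recall_score(y_true, y_pred, pos_label=1)
    spec = recall_score(y_true, y_pred, pos_label=0)
    return np.sqrt(sens * spec)

# Synthetic imbalanced dataset (about 9:1 class ratio)
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10],
                "gamma": ["scale", 0.01, 0.1],
                "class_weight": [None, "balanced", {0: 1, 1: 5}, {0: 1, 1: 10}]},
    scoring=make_scorer(g_mean), cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```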
{
"docid": "f4422ff5d89e2035d6480f6bc6eb5fb2",
"text": "Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.",
"title": ""
},
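To illustrate why integer-valued Hamming distances call for tie-aware evaluation, the sketch below estimates average precision by averaging over random orderings within tied distance groups. This Monte Carlo stand-in conveys the idea only; the paper derives closed-form tie-aware metrics and optimizes their continuous relaxations, neither of which is reproduced here, and the distances and labels are hypothetical.

```python
import numpy as np

def average_precision(relevance):
    """Standard AP for a binary relevance vector given in ranked order."""
    relevance = np.asarray(relevance, dtype=float)
    hits = np.cumsum(relevance)
    precisions = hits / np.arange(1, len(relevance) + 1)
    return (precisions * relevance).sum() / max(relevance.sum(), 1)

def tie_aware_ap_mc(hamming, relevant, n_samples=2000, seed=0):
    """Monte Carlo estimate of AP averaged over random orderings of ties."""
    rng = np.random.default_rng(seed)
    hamming = np.asarray(hamming)
    relevant = np.asarray(relevant)
    aps = []
    for _ in range(n_samples):
        noise = rng.random(len(hamming))        # random tie-breaker
        order = np.lexsort((noise, hamming))    # sort by distance, then noise
        aps.append(average_precision(relevant[order]))
    return float(np.mean(aps))

# Hypothetical Hamming distances and ground-truth relevance labels
print(tie_aware_ap_mc([1, 1, 2, 2, 2, 3], [1, 0, 1, 0, 1, 0]))
```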
{
"docid": "c443ca07add67d6fc0c4901e407c68f2",
"text": "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.",
"title": ""
},
{
"docid": "0a3574fe7ebd3ff571ee9b098f0108b7",
"text": "With the progress and development of national economy as well as power system, reliability and safety issues of power system have been more important. Development of distribution Transformer Health Monitoring System (THMS) has been done in that reason. Distribution transformer is the most vital asset in any electrical distribution network and therefore it needs special care and attention. This THMS can monitor the health status of the distribution transformer in real time aspect. As a large number of transformers are distributed over a wide area in present electric systems, it's difficult to monitor the condition manually of every single transformer. So automatic data acquisition and transformer condition monitoring has been an important issue. This project presents design and implementation of a mobile embedded system to monitor load currents, over voltage, transformer oil level and oil temperature. The implementation on-line monitoring system integrates Global Service Mobile (GSM) Modem, with single chip microcontroller and sensors. It is installed at the distribution transformer site. The output values of sensors are processed and recorded in the system memory. System programmed with some predefined instructions to check abnormal conditions. If there is any abnormality on the system, the GSM module will send SMS (Short Message Service) messages to designated mobile telephones containing information about the abnormality according to the aforesaid predefined instructions. This mobile system will help the utilities to optimally utilize transformers and identify problems before any catastrophic failure occurs. This system will be an advanced step to the automation by diminishing human dependency. As it is a wireless communicating system, there is no need of large cables which are of high cost. Thus THMS offers a more improved transformer monitoring.",
"title": ""
},
{
"docid": "60f561722cf0aea09a691269c7768322",
"text": "Embedded electronic components, so-called ECU (Electronic Controls Units), are nowadays a prominent part of a car's architecture. These ECUs, monitoring and controlling the different subsystems of a car, are interconnected through several gateways and compose the global internal network of the car. Moreover, modern cars are now able to communicate with other devices through wired or wireless interfaces such as USB, Bluetooth, WiFi or even 3G. Such interfaces may expose the internal network to the outside world and can be seen as entry points for cyber attacks. In this paper, we present a survey on security threats and protection mechanisms in embedded automotive networks. After introducing the different protocols being used in the embedded networks of current vehicles, we then analyze the potential threats targeting these networks and describe how the attackers' opportunities can be enhanced by the new communication abilities of modern cars. Finally, we present the security solutions currently being devised to address these problems.",
"title": ""
},
{
"docid": "35ae4e59fd277d57c2746dfccf9b26b0",
"text": "In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wised saliency maps from the superpixel-based background and foreground saliency estimations. Experiment results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.",
"title": ""
},
{
"docid": "77b9d8a71d5bdd0afdf93cd525950496",
"text": "One of the main tasks of a dialog system is to assign intents to user utterances, which is a form of text classification. Since intent labels are application-specific, bootstrapping a new dialog system requires collecting and annotating in-domain data. To minimize the need for a long and expensive data collection process, we explore ways to improve the performance of dialog systems with very small amounts of training data. In recent years, word embeddings have been shown to provide valuable features for many different language tasks. We investigate the use of word embeddings in a text classification task with little training data. We find that count and vector features complement each other and their combination yields better results than either type of feature alone. We propose a simple alternative, vector extrema, to replace the usual averaging of a sentence’s vectors. We show how taking vector extrema is well suited for text classification and compare it against standard vector baselines in three different applications.",
"title": ""
},
{
"docid": "dedf96c3e23dc7fd873c5fe27620a959",
"text": "This paper presents a monocular algorithm for front and rear vehicle detection, developed as part of the FP7 V-Charge project's perception system. The system is made of an AdaBoost classifier with Haar Features Decision Stump. It processes several virtual perspective images, obtained by un-warping 4 monocular fish-eye cameras mounted all-around an autonomous electric car. The target scenario is the automated valet parking, but the presented technique fits well in any general urban and highway environment. A great attention has been given to optimize the computational performance. The accuracy in the detection and a low computation costs are provided by combining a multiscale detection scheme with a Soft-Cascade classifier design. The algorithm runs in real time on the project's hardware platform. The system has been tested on a validation set, compared with several AdaBoost schemes, and the corresponding results and statistics are also reported.",
"title": ""
},
{
"docid": "c549fd965f95eb3a22bbc5f574b32b9e",
"text": "Branchial cleft cysts are benign lesions caused by anomalous development of the brachial cleft. This report describes a 20-year-old girl with swelling on the right lateral aspect of the neck, which expanded slowly but progressively. The clinical suspicion was that of a branchial cleft cyst. Sonography revealed a homogeneously hypo- to anechoic mass with well-defined margins and no intralesional septa. Color Doppler reviewed no internal vascularization. The ultrasound examination confirmed the clinical diagnosis of a second branchial cleft cyst, demonstrating the cystic nature of the mass and excluding the presence of complications. For superficial lesions like these, ultrasound is the first-level imaging study of choice because it is non-invasive, rapid, low-cost, and does not involve exposure to ionizing radiation.",
"title": ""
},
{
"docid": "a6ba94c0faf2fd41d8b1bd5a068c6d3d",
"text": "The main mechanisms responsible for performance degradation of millimeter wave (mmWave) and terahertz (THz) on-chip antennas are reviewed. Several techniques to improve the performance of the antennas and several high efficiency antenna types are presented. In order to illustrate the effects of the chip topology on the antenna, simulations and measurements of mmWave and THz on-chip antennas are shown. Finally, different transceiver architectures are explored with emphasis on the challenges faced in a wireless multi-core environment.",
"title": ""
},
{
"docid": "4a63f4357885287019095b5b736cd453",
"text": "In this paper, we present what we think is an elegant solution to some problems in the discourse-structural modelling of speech attribution. Using mostly examples from the Wall Street Journal Corpus, we show that the approach proposed by Carlson and Marcu (2001) leads to irresolvable dilemmas that can be avoided with a suitable treatment of attribution in an underspecified representation of discourse structure. Most approaches to discourse structure assume that textual coherence can be modelled as trees. In particular, it has been shown that coherent discourse follows the so-called rightfrontier constraint (RFC), which essentially ascertains a hierarchical structure without crossed dependencies. We will discuss putative counterexamples to these two assumptions, most of which involve reported speech as in (1) (cited in Wolf and Gibson 2005): (1) “Sure I’ll be polite,” promised one BMW driver who gave his name only as Rudolph. “As long as the trucks and the timid stay out of the left lane.” In (1) the second part of the quote should be linked to the first part (and not the whole first sentence) by a condition relation. If we were to analyse the parenthetical speech reporting clause (“promised one BMW driver ...”) as the nucleus of its host clause (i.e., the quote), the RFC would prevent linkage between the two parts of the quote. If the attribution is analysed as a satellite of the quote, as in Carlson and Marcu (2001), Wolf and Gibson argue, it should be a satellite to both parts of the quote, thus violating treeness. In this paper, we will explore the problems arising from this type of construction and propose a treatment of speech report attributions that we will argue allows us to preserve both, treeness and the RFC in building discourse structures. 1 The (non-)treatment of speech attribution in classic Rhetorical Structure Theory (RST) In ‘classic’ Rhetorical Structure Theory (RST; Mann and Thompson 1988), the problems in accommodating speech report attributions do not arise, because classic RST does not separate complements of verbs and parenthetical speech reporting clauses from their host clause. Leaving speech attribution implicit is in line with the general ‘philosophy’ of RST, which aims to represent not all possible links, but the most plausible structure licensed by 1 Markus Egg is now at Humboldt University, Berlin. This manuscript is a slightly updated (2009) version of our paper in the Proceedings of the Workshop on Constraints in Discourse, Maynooth, Ireland 2006 (http://www.constraints-in-discourse.org/cid06/).",
"title": ""
},
{
"docid": "48f5e3207e1d0a852bc7c922bf285288",
"text": "The language used in tweets from 1,300 different US counties was found to be predictive of the subjective well-being of people living in those counties as measured by representative surveys. Topics, sets of cooccurring words derived from the tweets using LDA, improved accuracy in predicting life satisfaction over and above standard demographic and socio-economic controls (age, gender, ethnicity, income, and education). The LDA topics provide a greater behavioural and conceptual resolution into life satisfaction than the broad socio-economic and demographic variables. For example, tied in with the psychological literature, words relating to outdoor activities, spiritual meaning, exercise, and good jobs correlate with increased life satisfaction, while words signifying disengagement like ’bored’ and ’tired’ show a negative association.",
"title": ""
}
] |
scidocsrr
|
8b517331872863ab211cb98a522d3cc2
|
Lexical and Syntactic cues to identify Reference Scope of Citance
|
[
{
"docid": "565941db0284458e27485d250493fd2a",
"text": "Identifying background (context) information in scientific articles can help scholars understand major contributions in their research area more easily. In this paper, we propose a general framework based on probabilistic inference to extract such context information from scientific papers. We model the sentences in an article and their lexical similarities as aMarkov Random Fieldtuned to detect the patterns that context data create, and employ a Belief Propagationmechanism to detect likely context sentences. We also address the problem of generating surveys of scientific papers. Our experiments show greater pyramid scores for surveys generated using such context information rather than citation sentences alone.",
"title": ""
},
{
"docid": "16de36d6bf6db7c294287355a44d0f61",
"text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-",
"title": ""
},
{
"docid": "2c05a4087aa9bd7da46f24a37f1526e0",
"text": "This paper presents a machine learning system that uses dependency-based features and lexical features for recognizing textual entailment. The proposed system evaluates the feature values automatically. The performance of the proposed system is evaluated by conducting experiments on RTE1, RTE2 and RTE3 datasets. Further, a comparative study of the current system with other ML-based systems for RTE to check the performance of the proposed system is also presented. The dependency-based heuristics and lexical features from the current system have resulted in significant improvement in accuracy over existing state-of-art ML-based solutions for RTE.",
"title": ""
}
] |
[
{
"docid": "a747b503e597ebdb9fd1a32b9dccd04e",
"text": "In this paper, we introduce KAZE features, a novel multiscale 2D feature detection and description algorithm in nonlinear scale spaces. Previous approaches detect and describe features at different scale levels by building or approximating the Gaussian scale space of an image. However, Gaussian blurring does not respect the natural boundaries of objects and smoothes to the same degree both details and noise, reducing localization accuracy and distinctiveness. In contrast, we detect and describe 2D features in a nonlinear scale space by means of nonlinear diffusion filtering. In this way, we can make blurring locally adaptive to the image data, reducing noise but retaining object boundaries, obtaining superior localization accuracy and distinctiviness. The nonlinear scale space is built using efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. We present an extensive evaluation on benchmark datasets and a practical matching application on deformable surfaces. Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space, but comparable to SIFT, our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods.",
"title": ""
},
{
"docid": "ee6925a80a6c49fb37181377d7287bb6",
"text": "In two articles Timothy Noakes proposes a new physiological model in which skeletal muscle recruitment is regulated by a central \"govenor,\" specifically to prevent the development of a progressive myocardial ischemia that would precede the development of skeletal muscle anaerobiosis during maximal exercise. In this rebuttal to the Noakes' papers, we argue that Noakes has ignored data supporting the existing hypothesis that under normal conditions cardiac output is limiting maximal aerobic power during dynamic exercise engaging large muscle groups.",
"title": ""
},
{
"docid": "f13000c4870a85e491f74feb20f9b2d4",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "844fa359828628af6006c747a1d5edaa",
"text": "We use deep learning to model interactions across two or more sets of objects, such as user–movie ratings, protein–drug bindings, or ternary useritem-tag interactions. The canonical representation of such interactions is a matrix (or a higherdimensional tensor) with an exchangeability property: the encoding’s meaning is not changed by permuting rows or columns. We argue that models should hence be Permutation Equivariant (PE): constrained to make the same predictions across such permutations. We present a parameter-sharing scheme and prove that it could not be made any more expressive without violating PE. This scheme yields three benefits. First, we demonstrate state-of-the-art performance on multiple matrix completion benchmarks. Second, our models require a number of parameters independent of the numbers of objects, and thus scale well to large datasets. Third, models can be queried about new objects that were not available at training time, but for which interactions have since been observed. In experiments, our models achieved surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movies).",
"title": ""
},
{
"docid": "112b9294f4d606a0112fe80742698184",
"text": "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their ka ma. A set of nodes, called a bank-set , keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer filesharing application.",
"title": ""
},
{
"docid": "2de4de4a7b612fd8d87a40780acdd591",
"text": "In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture; in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events like TLB misses, L1 and L2 cache misses, by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms make them perform well, which is confirmed by experimental results. *This work was carried out when the author was at the University of Amsterdam, supported by SION grant 612-23-431 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.",
"title": ""
},
{
"docid": "5ef37c0620e087d3552499e2b9b4fc84",
"text": "A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, “turbo codes.” We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to shed some light on some crucial questions which have been floating around in the communications community since the proposal of turbo codes.",
"title": ""
},
{
"docid": "2d822e022363b371f62a803d79029f09",
"text": "AIM\nTo explore the relationship between sources of stress and psychological burn-out and to consider the moderating and mediating role played sources of stress and different coping resources on burn-out.\n\n\nBACKGROUND\nMost research exploring sources of stress and coping in nursing students construes stress as psychological distress. Little research has considered those sources of stress likely to enhance well-being and, by implication, learning.\n\n\nMETHOD\nA questionnaire was administered to 171 final year nursing students. Questions were asked which measured sources of stress when rated as likely to contribute to distress (a hassle) and rated as likely to help one achieve (an uplift). Support, control, self-efficacy and coping style were also measured, along with their potential moderating and mediating effect on burn-out.\n\n\nFINDINGS\nThe sources of stress likely to lead to distress were more often predictors of well-being than sources of stress likely to lead to positive, eustress states. However, placement experience was an important source of stress likely to lead to eustress. Self-efficacy, dispositional control and support were other important predictors. Avoidance coping was the strongest predictor of burn-out and, even if used only occasionally, it can have an adverse effect on burn-out. Initiatives to promote support and self-efficacy are likely to have the more immediate benefits in enhancing student well-being.\n\n\nCONCLUSION\nNurse educators need to consider how course experiences contribute not just to potential distress but to eustress. How educators interact with their students and how they give feedback offers important opportunities to promote self-efficacy and provide valuable support. Peer support is a critical coping resource and can be bolstered through induction and through learning and teaching initiatives.",
"title": ""
},
{
"docid": "28ad980a6099c12e40fb2e219a552877",
"text": "BACKGROUND\nThe risk factors affecting aortic stenosis (AS) progression are not clearly defined. Insights into this may allow for its secondary prevention.\n\n\nMETHODS AND RESULTS\nWe investigated predictors of AS progression in 170 consecutive patients with AS who had paired echocardiograms > or =3 months (23+/-11) apart. Various clinical, echocardiographic, and biochemical variables were related to the change in aortic valve area (AVA). The annual rate of reduction in AVA was 0.10+/-0.27 cm(2) or 7+/-18% per year. The reduction in AVA per year was significantly related to initial AVA (r = 0.46, P<0.0001), the mean aortic valve gradient (r = 0.27, P = 0.04), left ventricular (LV) outflow tract velocity (r = 0.26, P = 0.001), and LV end-diastolic diameter (r = 0.20, P = 0.04) and marginally to serum creatinine level (r = 0.15, P = 0.08). Patients with a rate of reduction in AVA faster than the mean had higher serum creatinine (P = 0.04) and calcium (P = 0.08) levels. Those with a serum cholesterol level >200 mg/dL had a rate of AVA reduction roughly twice that of those with a lower cholesterol level (P = 0.04). Stepwise multiple regression analysis identified initial AVA, current smoking, and serum calcium level as the independent predictors of amount of AVA reduction per year.\n\n\nCONCLUSIONS\nAbsolute and percentage reduction in AVA per year in those with AS is greater in those with milder degrees of stenosis and is accelerated in the presence of smoking, hypercholesterolemia, and elevated serum creatinine and calcium levels. These findings may have important implications in gaining further insights into the mechanism of AS progression and in formulating strategies to retard this process.",
"title": ""
},
{
"docid": "aec604da03d170b2aa1c67cdae729cf9",
"text": "Plastic debris litters aquatic habitats globally, the majority of which is microscopic (< 1 mm), and is ingested by a large range of species. Risks associated with such small fragments come from the material itself and from chemical pollutants that sorb to it from surrounding water. Hazards associated with the complex mixture of plastic and accumulated pollutants are largely unknown. Here, we show that fish, exposed to a mixture of polyethylene with chemical pollutants sorbed from the marine environment, bioaccumulate these chemical pollutants and suffer liver toxicity and pathology. Fish fed virgin polyethylene fragments also show signs of stress, although less severe than fish fed marine polyethylene fragments. We provide baseline information regarding the bioaccumulation of chemicals and associated health effects from plastic ingestion in fish and demonstrate that future assessments should consider the complex mixture of the plastic material and their associated chemical pollutants.",
"title": ""
},
{
"docid": "d61e481378ee88da7a33cf88bf69dbef",
"text": "Deep neural networks (DNNs) have achieved tremendous success in many tasks of machine learning, such as the image classification. Unfortunately, researchers have shown that DNNs are easily attacked by adversarial examples, slightly perturbed images which can mislead DNNs to give incorrect classification results. Such attack has seriously hampered the deployment of DNN systems in areas where security or safety requirements are strict, such as autonomous cars, face recognition, malware detection. Defensive distillation is a mechanism aimed at training a robust DNN which significantly reduces the effectiveness of adversarial examples generation. However, the state-of-the-art attack can be successful on distilled networks with 100% probability. But it is a white-box attack which needs to know the inner information of DNN. Whereas, the black-box scenario is more general. In this paper, we first propose the -neighborhood attack, which can fool the defensively distilled networks with 100% success rate in the white-box setting, and it is fast to generate adversarial examples with good visual quality. On the basis of this attack, we further propose the regionbased attack against defensively distilled DNNs in the blackbox setting. And we also perform the bypass attack to indirectly break the distillation defense as a complementary method. The experimental results show that our black-box attacks have a considerable success rate on defensively distilled networks.",
"title": ""
},
{
"docid": "73e6f03d67508bd2f04b955fc750c18d",
"text": "Interleaving is a key component of many digital communication systems involving error correction schemes. It provides a form of time diversity to guard against bursts of errors. Recently, interleavers have become an even more integral part of the code design itself, if we consider for example turbo and turbo-like codes. In a non-cooperative context, such as passive listening, it is a challenging problem to estimate the interleaver parameters. In this paper we propose an algorithm that allows us to estimate the parameters of the interleaver at the output of a binary symmetric channel and to locate the codewords in the interleaved block. This gives us some clues about the interleaving function used.",
"title": ""
},
{
"docid": "e3bbff933acaf7d42f91a6a88b43ac13",
"text": "The problem of extracting sentiments from text is a very complex task, in particular due to the significant amount of Natural Language Processing (NLP) required. This task becomes even more difficult when dealing with morphologically rich languages such as Modern Standard Arabic (MSA) and when processing brief, noisy texts such as “tweets” or “Facebook statuses”. This paper highlights key issues researchers are facing and innovative approaches that have been developed when performing subjectivity and sentiment analysis (SSA) on Arabic text in general and Arabic social media text in particular. A preprocessing phase to sentiment analysis is proposed and shown to noticeably improve the results of sentiment extraction from Arabic social media data.",
"title": ""
},
{
"docid": "96bc6ffcc299e7b2221dbb8e2c4349dd",
"text": "At millimeter wave (mmW) frequencies, beamforming and large antenna arrays are an essential requirement to combat the high path loss for mmW communication. Moreover, at these frequencies, very large bandwidths are available t o fulfill the data rate requirements of future wireless networks. However, utilization of these large bandwidths and of large antenna a rrays can result in a high power consumption which is an even bigger concern for mmW receiver design. In a mmW receiver, the analog-to-digital converter (ADC) is generally considered as the most power consuming block. In this paper, primarily focusing on the ADC power, we analyze and compare the total power consumption of the complete analog chain for Analog, Digita l and Hybrid beamforming (ABF, DBF and HBF) based receiver design. We show how power consumption of these beamforming schemes varies with a change in the number of antennas, the number of ADC bits (b) and the bandwidth (B). Moreover, we compare low power (as in [1]) and high power (as in [2]) ADC models, and show that for a certain range of number of antenna s, b and B, DBF may actually have a comparable and lower power consumption than ABF and HBF, respectively. In addition, we also show how the choice of an appropriate beamforming schem e depends on the signal-to-noise ratio regime.",
"title": ""
},
{
"docid": "9c98b0652776a8402979134e753a8b86",
"text": "In this paper, the shielded coil structure using the ferrites and the metallic shielding is proposed. It is compared with the unshielded coil structure (i.e. a pair of circular loop coils only) to demonstrate the differences in the magnetic field distributions and system performance. The simulation results using the 3D Finite Element Analysis (FEA) tool show that it can considerably suppress the leakage magnetic field from 100W-class wireless power transfer (WPT) system with the enhanced system performance.",
"title": ""
},
{
"docid": "751231430c54bf33649e4c4e14d45851",
"text": "The current state of A. D. Baddeley and G. J. Hitch's (1974) multicomponent working memory model is reviewed. The phonological and visuospatial subsystems have been extensively investigated, leading both to challenges over interpretation of individual phenomena and to more detailed attempts to model the processes underlying the subsystems. Analysis of the controlling central executive has proved more challenging, leading to a proposed clarification in which the executive is assumed to be a limited capacity attentional system, aided by a newly postulated fourth system, the episodic buffer. Current interest focuses most strongly on the link between working memory and long-term memory and on the processes allowing the integration of information from the component subsystems. The model has proved valuable in accounting for data from a wide range of participant groups under a rich array of task conditions. Working memory does still appear to be working.",
"title": ""
},
{
"docid": "7ca2d093da7646ff0d69fb3ba9d675ae",
"text": "Advancements in deep learning over the years have attracted research into how deep artificial neural networks can be used in robotic systems. It is on this basis that the following research survey will present a discussion of the applications, gains, and obstacles to deep learning in comparison to physical robotic systems while using modern research as examples. The research survey will present a summarization of the current research with specific focus on the gains and obstacles in comparison to robotics. This will be followed by a primer on discussing how notable deep learning structures can be used in robotics with relevant examples. The next section will show the practical considerations robotics researchers desire to use in regard to deep learning neural networks. Finally, the research survey will show the shortcomings and solutions to mitigate them in addition to discussion of the future trends. The intention of this research is to show how recent advancements in the broader robotics field can inspire additional research in applying deep learning in robotics.",
"title": ""
},
{
"docid": "2ef9dfe08c30fad047803fc1926abe46",
"text": "Modern cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet. Continuous changes occur autonomously and unpredictably, and they are out of control of the cloud provider. Therefore, advanced solutions have to be developed able to dynamically adapt the cloud infrastructure, while providing continuous service and performance guarantees. A number of autonomic computing solutions have been developed such that resources are dynamically allocated among running applications on the basis of short-term demand estimates. However, only performance and energy trade-off have been considered so far with a lower emphasis on the infrastructure dependability/availability which has been demonstrated to be the weakest link in the chain for early cloud providers. The aim of this paper is to fill this literature gap devising resource allocation policies for cloud virtualized environments able to identify performance and energy trade-offs, providing a priori availability guarantees for cloud end-users.",
"title": ""
},
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
},
{
"docid": "548e1962ac4a2ea36bf90db116c4ff49",
"text": "LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.",
"title": ""
}
] |
scidocsrr
|
3ed33a3b4ad42e23750910e74951cf91
|
An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave
|
[
{
"docid": "fd18cb0cc94b336ff32b29e0f27363dc",
"text": "We have developed a real-time algorithm for detection of the QRS complexes of ECG signals. It reliably recognizes QRS complexes based upon digital analyses of slope, amplitude, and width. A special digital bandpass filter reduces false detections caused by the various types of interference present in ECG signals. This filtering permits use of low thresholds, thereby increasing detection sensitivity. The algorithm automatically adjusts thresholds and parameters periodically to adapt to such ECG changes as QRS morphology and heart rate. For the standard 24 h MIT/BIH arrhythmia database, this algorithm correctly detects 99.3 percent of the QRS complexes.",
"title": ""
}
] |
[
{
"docid": "6bdf8a200a62096e188f7305d364b739",
"text": "The secretion of glucocorticoids (GCs) is a classic endocrine response to stress. Despite that, it remains controversial as to what purpose GCs serve at such times. One view, stretching back to the time of Hans Selye, posits that GCs help mediate the ongoing or pending stress response, either via basal levels of GCs permitting other facets of the stress response to emerge efficaciously, and/or by stress levels of GCs actively stimulating the stress response. In contrast, a revisionist viewpoint posits that GCs suppress the stress response, preventing it from being pathologically overactivated. In this review, we consider recent findings regarding GC action and, based on them, generate criteria for determining whether a particular GC action permits, stimulates, or suppresses an ongoing stress-response or, as an additional category, is preparative for a subsequent stressor. We apply these GC actions to the realms of cardiovascular function, fluid volume and hemorrhage, immunity and inflammation, metabolism, neurobiology, and reproductive physiology. We find that GC actions fall into markedly different categories, depending on the physiological endpoint in question, with evidence for mediating effects in some cases, and suppressive or preparative in others. We then attempt to assimilate these heterogeneous GC actions into a physiological whole.",
"title": ""
},
{
"docid": "adcbc47e18f83745f776dec84d09559f",
"text": "Adaptive and flexible production systems require modular and reusable software especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure Software Maturity for automated Production Systems is introduced. The approach identifies weaknesses and strengths of various companies’ solutions for modularity of software in the design of automated Production Systems (aPS). At first, a self-assessed questionnaire is used to evaluate a large number of companies concerning their software maturity. Secondly, we analyze PLC code, architectural levels, workflows and abilities to configure code automatically out of engineering information in four selected companies. In this paper, the questionnaire results from 16 German world-leading companies in machine and plant manufacturing and four case studies validating the results from the detailed analyses are introduced to prove the applicability of the approach and give a survey of the state of the art in industry. Keywords—factory automation, automated production systems, maturity, modularity, control software, Programmable Logic Controller.",
"title": ""
},
{
"docid": "3da6c20ba154de6fbea24c3cbb9c8ebb",
"text": "The tourism industry is characterized by ever-increasing competition, causing destinations to seek new methods to attract tourists. Traditionally, a decision to visit a destination is interpreted, in part, as a rational calculation of the costs/benefits of a set of alternative destinations, which were derived from external information sources, including e-WOM (word-of-mouth) or travelers' blogs. There are numerous travel blogs available for people to share and learn about travel experiences. Evidence shows, however, that not every blog exerts the same degree of influence on tourists. Therefore, which characteristics of these travel blogs attract tourists' attention and influence their decisions, becomes an interesting research question. Based on the concept of information relevance, a model is proposed for interrelating various attributes specific to blog's content and perceived enjoyment, an intrinsic motivation of information systems usage, to mitigate the above-mentioned gap. Results show that novelty, understandability, and interest of blogs' content affect behavioral intention through blog usage enjoyment. Finally, theoretical and practical implications are proposed. Tourism is a popular activity in modern life and has contributed significantly to economic development for decades. However, competition in almost every sector of this industry has intensified during recent years & Pan, 2008); tourism service providers are now finding it difficult to acquire and keep customers (Echtner & Ritchie, 1991; Ho, 2007). Therefore, methods of attracting tourists to a destination are receiving greater attention from researchers, policy makers, and marketers. Before choosing a destination, tourists may search for information to support their decision-making By understanding the relationships between various information sources' characteristics and destination choice, tourism managers can improve their marketing efforts. Recently, personal blogs have become an important source for acquiring travel information With personal blogs, many tourists can share their travel experiences with others and potential tourists can search for and respond to others' experiences. Therefore, a blog can be seen as an asynchronous and many-to-many channel for conveying travel-related electronic word-of-mouth (e-WOM). By using these forms of inter-personal influence media, companies in this industry can create a competitive advantage (Litvin et al., 2008; Singh et al., 2008). Weblogs are now widely available; therefore, it is not surprising that the quantity of available e-WOM has increased (Xiang & Gret-zel, 2010) to an extent where information overload has become a Empirical evidence , however, indicates that people may not consult numerous blogs for advice; the degree of inter-personal influence varies from blog to blog (Zafiropoulos, 2012). Determining …",
"title": ""
},
{
"docid": "ae87441b3ce5fd388101dc85ad25b558",
"text": "University of Tampere School of Management Author: MIIA HANNOLA Title: Critical factors in Customer Relationship Management system implementation Master’s thesis: 84 pages, 2 appendices Date: November 2016",
"title": ""
},
{
"docid": "edfb50c784e6e7a89ce12d524f667398",
"text": "Unconventional machining processes (communally named advanced or modern machining processes) are widely used by manufacturing industries. These advanced machining processes allow producing complex profiles and high quality-products. However, several process parameters should be optimized to achieve this end. In this paper, the optimization of process parameters of two conventional and four advanced machining processes is investigated: drilling process, grinding process, abrasive jet machining (AJM), abrasive water jet machining (AWJM), ultrasonic machining (USM), and water jet machining (WJM), respectively. This research employed two bio-inspired algorithms called the cuckoo optimization algorithm (COA) and the hoopoe heuristic (HH) to optimize the machining control parameters of these processes. The obtained results are compared with other optimization algorithms described and applied in the literature.",
"title": ""
},
{
"docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7",
"text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.",
"title": ""
},
{
"docid": "68735fb7f8f0485c0e3048fdf156973a",
"text": "Recently, as biometric technology grows rapidly, the importance of fingerprint spoof detection technique is emerging. In this paper, we propose a technique to detect forged fingerprints using contrast enhancement and Convolutional Neural Networks (CNNs). The proposed method detects the fingerprint spoof by performing contrast enhancement to improve the recognition rate of the fingerprint image, judging whether the sub-block of fingerprint image is falsified through CNNs composed of 6 weight layers and totalizing the result. Our fingerprint spoof detector has a high accuracy of 99.8% on average and has high accuracy even after experimenting with one detector in all datasets.",
"title": ""
},
{
"docid": "0b0b313c16697e303522fef245d97ba8",
"text": "The development of novel targeted therapies with acceptable safety profiles is critical to successful cancer outcomes with better survival rates. Immunotherapy offers promising opportunities with the potential to induce sustained remissions in patients with refractory disease. Recent dramatic clinical responses in trials with gene modified T cells expressing chimeric antigen receptors (CARs) in B-cell malignancies have generated great enthusiasm. This therapy might pave the way for a potential paradigm shift in the way we treat refractory or relapsed cancers. CARs are genetically engineered receptors that combine the specific binding domains from a tumor targeting antibody with T cell signaling domains to allow specifically targeted antibody redirected T cell activation. Despite current successes in hematological cancers, we are only in the beginning of exploring the powerful potential of CAR redirected T cells in the control and elimination of resistant, metastatic, or recurrent nonhematological cancers. This review discusses the application of the CAR T cell therapy, its challenges, and strategies for successful clinical and commercial translation.",
"title": ""
},
{
"docid": "18be80cfba8bea4936552e2841946300",
"text": "s the hotel industry grows into a more technologyoriented industry, the need to understand and adopt eprocurement practices becomes more important. This research explores the adoption of e-procurement by the hotels in the Kumasi metropolis, focusing more on hoteliers’ perception of the concept of e-procurement and its importance to the hotel industry. The data was collected through survey from eleven selected hotels in Kumasi Metropolis. Results indicated that hoteliers are using e-procurement but not with the internet but through telephone. Benefits derived from the implementation of e-procurement are quality of goods and services, saving time and cost. However some costs which include buying and fixing of technical equipments, consultation fees for service providers, training of existing staff and recruitment of new staff will incur incorporating e-procurement. It was recommended that hoteliers need to ensure the needed systems and tools are in place for the operationalization of eprocurement.",
"title": ""
},
{
"docid": "bb0653ea1b7c524cce5015da216f527b",
"text": "Literature in knowledge management reveals that there are a number of knowledge sharing (KS) models. A range of models exists as a result of differed view on the subject which is broad and subjective. Some of the models address the antecedents or factors influencing KS whilst the others address the relationship between KS and performance. The benefit of KS cannot be materialized if it does not contribute to organizational attribute. Despite the variety of models, only a few focuses on KS in public sectors. This could resulted from a situation whereby KS is at its infancy in public services.. This paper reviews the existing models of KS and upon a critical review, a KS model which is thought to be suitable for used by public sector in Malaysia is then proposed. The proposed model provides foundation for subsequent research to firstly investigate factors affecting KS in public sector and secondly, seek to identify the relationship between KS, organizational performance and service delivery.",
"title": ""
},
{
"docid": "c3c1ca3e4e05779bccf4247296df0876",
"text": "Intramedullary nailing is one of the most convenient biological options for treating distal femoral fractures. Because the distal medulla of the femur is wider than the middle diaphysis and intramedullary nails cannot completely fill the intramedullary canal, intramedullary nailing of distal femoral fractures can be difficult when trying to obtain adequate reduction. Some different methods exist for achieving reduction. The purpose of this study was determine whether the use of blocking screws resolves varus or valgus and translation and recurvatum deformities, which can be encountered in antegrade and retrograde intramedullary nailing. Thirty-four patients with distal femoral fractures underwent intramedullary nailing between January 2005 and June 2011. Fifteen patients treated by intramedullary nailing and blocking screws were included in the study. Six patients had distal diaphyseal fractures and 9 had distal diaphyseo-metaphyseal fractures. Antegrade nailing was performed in 7 patients and retrograde nailing was performed in 8. Reduction during surgery and union during follow-up were achieved in all patients with no significant complications. Mean follow-up was 26.6 months. Mean time to union was 12.6 weeks. The main purpose of using blocking screws is to achieve reduction, but they are also useful for maintaining permanent reduction. When inserting blocking screws, the screws must be placed 1 to 3 cm away from the fracture line to avoid from propagation of the fracture. When applied properly and in an adequate way, blocking screws provide an efficient solution for deformities encountered during intramedullary nailing of distal femur fractures.",
"title": ""
},
{
"docid": "a2ddfa2d72946dcd504cbc50409c28fd",
"text": "This paper describes a method to perform face pose estimation and high resolution facial feature extraction on the basis of stereoscopic color images. Unlike other approaches no light projection is required at running time. In our method face detection is based on color driven clustering of 3D points derived from stereo. A mesh model is registered with the post-processed face cluster using a variant of the iterative closest point algorithm (ICP). Pose is derived from correspondence. Then, pose and model information is used for face normalization and facial feature localization. Results show, stereo and color are powerful cues for finding the face and its pose under a wide range of poses, illuminations and expressions (PIE). Head orientation may vary in out of plane rotations up to plusmn45deg",
"title": ""
},
{
"docid": "91ed0637e0533801be8b03d5ad21d586",
"text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.",
"title": ""
},
{
"docid": "327450c9470de1254ecc209afcd8addb",
"text": "Intra-individual performance variability may be an important index of the efficiency with which executive control processes are implemented, Lesion studies suggest that damage to the frontal lobes is accompanied by an increase in such variability. Here we sought for the first time to investigate how the functional neuroanatomy of executive control is modulated by performance variability in healthy subjects by using an event-related functional magnetic resonance imaging (ER-fMRI) design and a Go/No-go response inhibition paradigm. Behavioural results revealed that individual differences in Go response time variability were a strong predictor of inhibitory success and that differences in mean Go response time could not account for this effect. Task-related brain activation was positively correlated with intra-individual variability within a distributed inhibitory network consisting of bilateral middle frontal areas and right inferior parietal and thalamic regions. Both the behavioural and fMRI data are consistent with the interpretation that those subjects with relatively higher intra-individual variability activate inhibitory regions to a greater extent, perhaps reflecting a greater requirement for top-down executive control in this group, a finding that may be relevant to disorders of executive/attentional control.",
"title": ""
},
{
"docid": "52237ca8bf4168e444383f2c3813d009",
"text": "This article develops a model of a project as a payoff function that depends on the state of the world and the choice of a sequence of actions. A causal mapping, which may be incompletely known by the project team, represents the impact of possible actions on the states of the world. An underlying probability space represents available information about the state of the world. Interactions among actions and states of the world determine the complexity of the payoff function. Activities are endogenous, in that they are the result of a policy that maximizes the expected project payoff. A key concept is the adequacy of the available information about states of the world and action effects. We express uncertainty, ambiguity, and complexity in terms of information adequacy. We identify three fundamental project management strategies: instructionism, learning, and selectionism. We show that classic project management methods emphasize adequate information and instructionism, and demonstrate how modern methods fit into the three fundamental strategies. The appropriate strategy is contingent on the type of uncertainty present and the complexity of the project payoff function. Our model establishes a rigorous language that allows the project manager to judge the adequacy of the available project information at the outset, choose an appropriate combination of strategies, and set a supporting project infrastructure—that is, systems for planning, coordination and incentives, and monitoring. (Project Management; Uncertainty; Complexity; Instructionalism; Project Selection; Ambiguity )",
"title": ""
},
{
"docid": "67c308f55be23d0faa4a53f71e551763",
"text": "STUDY OBJECTIVES\nTo examine and compare the efficacy and safety of salmeterol xinafoate, a long-acting inhaled beta2-adrenergic agonist, with inhaled ipratropium bromide and inhaled placebo in patients with COPD.\n\n\nDESIGN\nA stratified, randomized, double-blind, double-dummy, placebo-controlled, parallel group clinical trial.\n\n\nSETTING\nMultiple sites at clinics and university medical centers throughout the United States.\n\n\nPATIENTS\nFour hundred eleven symptomatic patients with COPD with FEV1 < or = 65% predicted and no clinically significant concurrent disease.\n\n\nINTERVENTIONS\nComparison of inhaled salmeterol (42 microg twice daily), inhaled ipratropium bromide (36 microg four times a day), and inhaled placebo (2 puffs four times a day) over 12 weeks.\n\n\nRESULTS\nSalmeterol xinafoate was significantly (p < 0.0001) better than placebo and ipratropium in improving lung function at the recommended doses over the 12-week trial. Both salmeterol and ipratropium reduced dyspnea related to activities of daily living compared with placebo; this improvement was associated with reduced use of supplemental albuterol. Analyses of time to first COPD exacerbation revealed salmeterol to be superior to placebo and ipratropium (p < 0.05). Adverse effects were similar among the three treatments.\n\n\nCONCLUSIONS\nThese collective data support the use of salmeterol as first-line bronchodilator therapy for the long-term treatment of airflow obstruction in patients with COPD.",
"title": ""
},
{
"docid": "d763947e969ade3c54c18f0b792a0f7b",
"text": "Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.",
"title": ""
},
{
"docid": "1b4a97df029e45e8d4cf8b8c548c420a",
"text": "Today, online social networks have become powerful tools for the spread of information. They facilitate the rapid and large-scale propagation of content and the consequences of an information -- whether it is favorable or not to someone, false or true -- can then take considerable proportions. Therefore it is essential to provide means to analyze the phenomenon of information dissemination in such networks. Many recent studies have addressed the modeling of the process of information diffusion, from a topological point of view and in a theoretical perspective, but we still know little about the factors involved in it. With the assumption that the dynamics of the spreading process at the macroscopic level is explained by interactions at microscopic level between pairs of users and the topology of their interconnections, we propose a practical solution which aims to predict the temporal dynamics of diffusion in social networks. Our approach is based on machine learning techniques and the inference of time-dependent diffusion probabilities from a multidimensional analysis of individual behaviors. Experimental results on a real dataset extracted from Twitter show the interest and effectiveness of the proposed approach as well as interesting recommendations for future investigation.",
"title": ""
},
{
"docid": "69b3275cb4cae53b3a8888e4fe7f85f7",
"text": "In this paper we propose a way to improve the K-SVD image denoising algorithm. The suggested method aims to reduce the gap that exists between the local processing (sparse-coding of overlapping patches) and the global image recovery (obtained by averaging the overlapping patches). Inspired by game-theory ideas, we define a disagreement-patch as the difference between the intermediate locally denoised patch and its corresponding part in the final outcome. Our algorithm iterates the denoising process several times, applied on modified patches. Those are obtained by subtracting the disagreement-patches from their corresponding input noisy ones, thus pushing the overlapping patches towards an agreement. Experimental results demonstrate the improvement this algorithm leads to.",
"title": ""
}
] |
scidocsrr
|
ed567cb2d0bfb0679ea2aa78d4b97de3
|
Comparison of circular and rectangular coil transformer parameters for wireless Power Transfer based on Finite Element Analysis
|
[
{
"docid": "5eb9c6540de63be3e7c645286f263b4d",
"text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.",
"title": ""
}
] |
[
{
"docid": "a7187fe4496db8a5ea4a5c550c9167a3",
"text": "We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [17] in several ways. In particular, we introduce a bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs to reduce vertex reaches. Our modifications greatly improve both preprocessing and query times. The resulting algorithm is as fast as the best previous method, due to Sanders and Schultes [28]. However, our algorithm is simpler and combines in a natural way with A search, which yields significantly better query times.",
"title": ""
},
{
"docid": "ca600a96bd895537af760efccb30c776",
"text": "This paper emphasizes the importance of Data Mining classification algorithms in predicting the vehicle collision patterns occurred in training accident data set. This paper is aimed at deriving classification rules which can be used for the prediction of manner of collision. The classification algorithms viz. C4.5, C-RT, CS-MC4, Decision List, ID3, Naïve Bayes and RndTree have been applied in predicting vehicle collision patterns. The road accident training data set obtained from the Fatality Analysis Reporting System (FARS) which is available in the University of Alabama’s Critical Analysis Reporting Environment (CARE) system. The experimental results indicate that RndTree classification algorithm achieved better accuracy than other algorithms in classifying the manner of collision which increases fatality rate in road accidents. Also the feature selection algorithms including CFS, FCBF, Feature Ranking, MIFS and MODTree have been explored to improve the classifier accuracy. The result shows that the Feature Ranking method significantly improved the accuracy of the classifiers.",
"title": ""
},
{
"docid": "0f42ee3de2d64956fc8620a2afc20f48",
"text": "In 4 experiments, the authors addressed the mechanisms by which grammatical gender (in Italian and German) may come to affect meaning. In Experiments 1 (similarity judgments) and 2 (semantic substitution errors), the authors found Italian gender effects for animals but not for artifacts; Experiment 3 revealed no comparable effects in German. These results suggest that gender effects arise as a generalization from an established association between gender of nouns and sex of human referents, extending to nouns referring to sexuated entities. Across languages, such effects are found when the language allows for easy mapping between gender of nouns and sex of human referents (Italian) but not when the mapping is less transparent (German). A final experiment provided further constraints: These effects during processing arise at a lexical-semantic level rather than at a conceptual level.",
"title": ""
},
{
"docid": "8b6116105914e3d912d4594b875e443b",
"text": "Patients with neuropathic pain (NP) are challenging to manage and evidence-based clinical recommendations for pharmacologic management are needed. Systematic literature reviews, randomized clinical trials, and existing guidelines were evaluated at a consensus meeting. Medications were considered for recommendation if their efficacy was supported by at least one methodologically-sound, randomized clinical trial (RCT) demonstrating superiority to placebo or a relevant comparison treatment. Recommendations were based on the amount and consistency of evidence, degree of efficacy, safety, and clinical experience of the authors. Available RCTs typically evaluated chronic NP of moderate to severe intensity. Recommended first-line treatments include certain antidepressants (i.e., tricyclic antidepressants and dual reuptake inhibitors of both serotonin and norepinephrine), calcium channel alpha2-delta ligands (i.e., gabapentin and pregabalin), and topical lidocaine. Opioid analgesics and tramadol are recommended as generally second-line treatments that can be considered for first-line use in select clinical circumstances. Other medications that would generally be used as third-line treatments but that could also be used as second-line treatments in some circumstances include certain antiepileptic and antidepressant medications, mexiletine, N-methyl-D-aspartate receptor antagonists, and topical capsaicin. Medication selection should be individualized, considering side effects, potential beneficial or deleterious effects on comorbidities, and whether prompt onset of pain relief is necessary. To date, no medications have demonstrated efficacy in lumbosacral radiculopathy, which is probably the most common type of NP. Long-term studies, head-to-head comparisons between medications, studies involving combinations of medications, and RCTs examining treatment of central NP are lacking and should be a priority for future research.",
"title": ""
},
{
"docid": "7882226d49d9d932ddda38c428cd8f63",
"text": "This paper outlines a framework for Internet banking security using multi-layered, feed-forward artificial neural networks. Such applications utilise anomaly detection techniques which can be applied for transaction authentication and intrusion detection within Internet banking security architectures. Such fraud 'detection' strategies have the potential to significantly limit present levels of financial fraud in comparison to existing fraud 'prevention' techniques",
"title": ""
},
{
"docid": "ab77b0bcfe10b0e322a917059ec112c2",
"text": "HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for video streaming services. In HAS, each video is temporally segmented and stored in different quality levels. Rate adaptation heuristics, deployed at the video player, allow the most appropriate level to be dynamically requested, based on the current network conditions. It has been shown that today’s heuristics underperform when multiple clients consume video at the same time, due to fairness issues among clients. Concretely, this means that different clients negatively influence each other as they compete for shared network resources. In this article, we propose a novel rate adaptation algorithm called FINEAS (Fair In-Network Enhanced Adaptive Streaming), capable of increasing clients’ Quality of Experience (QoE) and achieving fairness in a multiclient setting. A key element of this approach is an in-network system of coordination proxies in charge of facilitating fair resource sharing among clients. The strength of this approach is threefold. First, fairness is achieved without explicit communication among clients and thus no significant overhead is introduced into the network. Second, the system of coordination proxies is transparent to the clients, that is, the clients do not need to be aware of its presence. Third, the HAS principle is maintained, as the in-network components only provide the clients with new information and suggestions, while the rate adaptation decision remains the sole responsibility of the clients themselves. We evaluate this novel approach through simulations, under highly variable bandwidth conditions and in several multiclient scenarios. We show how the proposed approach can improve fairness up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks, each containing 30 clients streaming video at the same time.",
"title": ""
},
{
"docid": "6b89b946b5b3dd2ca79523495a82c961",
"text": "Mixed fermentations using controlled inoculation of Saccharomyces cerevisiae starter cultures and non-Saccharomyces yeasts represent a feasible way towards improving the complexity and enhancing the particular and specific characteristics of wines. The profusion of selected starter cultures has allowed the more widespread use of inoculated fermentations, with consequent improvements to the control of the fermentation process, and the use of new biotechnological processes in winemaking. Over the last few years, as a consequence of the re-evaluation of the role of non-Saccharomyces yeasts in winemaking, there have been several studies that have evaluated the use of controlled mixed fermentations using Saccharomyces and different non-Saccharomyces yeast species from the wine environment. The combined use of different species often results in unpredictable compounds and/or different levels of fermentation products being produced, which can affect both the chemical and the aromatic composition of wines. Moreover, possible synergistic interactions between different yeasts might provide a tool for the implementation of new fermentation technologies. Thus, knowledge of the Saccharomyces and non-Saccharomyces wine yeast interactions during wine fermentation needs to be improved. To reach this goal, further investigations into the genetic and physiological background of such non-Saccharomyces wine yeasts are needed, so as to apply '-omics' approaches to mixed culture fermentations.",
"title": ""
},
{
"docid": "ff6420335374291508063663acb9dbe6",
"text": "Many people are exposed to loss or potentially traumatic events at some point in their lives, and yet they continue to have positive emotional experiences and show only minor and transient disruptions in their ability to function. Unfortunately, because much of psychology's knowledge about how adults cope with loss or trauma has come from individuals who sought treatment or exhibited great distress, loss and trauma theorists have often viewed this type of resilience as either rare or pathological. The author challenges these assumptions by reviewing evidence that resilience represents a distinct trajectory from the process of recovery, that resilience in the face of loss or potential trauma is more common than is often believed, and that there are multiple and sometimes unexpected pathways to resilience.",
"title": ""
},
{
"docid": "e0fd648da901ed99ddbed3457bc83cfe",
"text": "This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.",
"title": ""
},
{
"docid": "cb62164bc5a582be0c45df28d8ebb797",
"text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.",
"title": ""
},
{
"docid": "0bb6e496cd176e85fcec98bed669e18d",
"text": "Men and women clearly differ in some psychological domains. A. H. Eagly (1995) shows that these differences are not artifactual or unstable. Ideally, the next scientific step is to develop a cogent explanatory framework for understanding why the sexes differ in some psychological domains and not in others and for generating accurate predictions about sex differences as yet undiscovered. This article offers a brief outline of an explanatory framework for psychological sex differences--one that is anchored in the new theoretical paradigm of evolutionary psychology. Men and women differ, in this view, in domains in which they have faced different adaptive problems over human evolutionary history. In all other domains, the sexes are predicted to be psychologically similar. Evolutionary psychology jettisons the false dichotomy between biology and environment and provides a powerful metatheory of why sex differences exist, where they exist, and in what contexts they are expressed (D. M. Buss, 1995).",
"title": ""
},
{
"docid": "a45109840baf74c61b5b6b8f34ac81d5",
"text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.",
"title": ""
},
{
"docid": "adb3180b9750cf0110f4a2edc781fc27",
"text": "A Winograd schema is a pair of sentences that differ in a single word and that contain an ambiguous pronoun whose referent is different in the two sentences and requires the use of commonsense knowledge or world knowledge to disambiguate. This paper discusses how Winograd schemas and other sentence pairs could be used as challenges for machine translation using distinctions between pronouns, such as gender, that appear in the target language but not in the source. A Winograd schema (Levesque, Davis, and Morgenstern 2012) is a pair of sentences, or of short texts, called the elements of the schema, that satisfy the following constraints: 1. The two elements are identical, except for a single word or two or three consecutive words. 2. Each element contains a pronoun. There are at least two noun phrases in the element that, grammatically, could be the antecedents of this pronoun. However, a human reader will reliably choose one of these as plausible and reject the rest as implausible. Thus, for a human reader, the resolution of the pronoun in each element is unambiguous. 3. The correct resolution of the pronoun is different in the two sentences. 4. Computationally simple strategies, such as those based on single word associations in text corpora or selectional restrictions, will not suffice to disambiguate either element. Rather, both disambiguation require some amount of world knowledge and of commonsense reasoning. The following is an example of a Winograd schema: A. The trophy doesn't fit in the brown suitcase because it's too large. B. The trophy doesn't fit in the brown suitcase because it's too small.",
"title": ""
},
{
"docid": "b76f6011edb583c2e0ff21cdbb35aba9",
"text": "User stories are a widely adopted requirements notation in agile development. Yet, user stories are too often poorly written in practice and exhibit inherent quality defects. Triggered by this observation, we propose the Quality User Story (QUS) framework, a set of 13 quality criteria that user story writers should strive to conform to. Based on QUS, we present the Automatic Quality User Story Artisan (AQUSA) software tool. Relying on natural language processing (NLP) techniques, AQUSA detects quality defects and suggest possible remedies. We describe the architecture of AQUSA, its implementation, and we report on an evaluation that analyzes 1023 user stories obtained from 18 software companies. Our tool does not yet reach the ambitious 100 % recall that Daniel Berry and colleagues require NLP tools for RE to achieve. However, we obtain promising results and we identify some improvements that will substantially improve recall and precision.",
"title": ""
},
{
"docid": "26a6ba8cba43ddfd3cac0c90750bf4ad",
"text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.",
"title": ""
},
{
"docid": "39e3056acbebeed983278c7eb2eca73f",
"text": "Various deep learning models have recently been applied to predictive modeling of Electronic Health Records (EHR). In medical claims data, which is a particular type of EHR data, each patient is represented as a sequence of temporally ordered irregularly sampled visits to health providers, where each visit is recorded as an unordered set of medical codes specifying patient's diagnosis and treatment provided during the visit. Based on the observation that different patient conditions have different temporal progression patterns, in this paper we propose a novel interpretable deep learning model, called Timeline. The main novelty of Timeline is that it has a mechanism that learns time decay factors for every medical code. This allows the Timeline to learn that chronic conditions have a longer lasting impact on future visits than acute conditions. Timeline also has an attention mechanism that improves vector embeddings of visits. By analyzing the attention weights and disease progression functions of Timeline, it is possible to interpret the predictions and understand how risks of future visits change over time. We evaluated Timeline on two large-scale real world data sets. The specific task was to predict what is the primary diagnosis category for the next hospital visit given previous visits. Our results show that Timeline has higher accuracy than the state of the art deep learning models based on RNN. In addition, we demonstrate that time decay factors and attentions learned by Timeline are in accord with the medical knowledge and that Timeline can provide a useful insight into its predictions.",
"title": ""
},
{
"docid": "333b15d94a2108929a8f6c18ef460ff4",
"text": "Inferring the latent emotive content of a narrative requires consideration of para-linguistic cues (e.g. pitch), linguistic content (e.g. vocabulary) and the physiological state of the narrator (e.g. heart-rate). In this study we utilized a combination of auditory, text, and physiological signals to predict the mood (happy or sad) of 31 narrations from subjects engaged in personal story-telling. We extracted 386 audio and 222 physiological features (using the Samsung Simband) from the data. A subset of 4 audio, 1 text, and 5 physiologic features were identified using Sequential Forward Selection (SFS) for inclusion in a Neural Network (NN). These features included subject movement, cardiovascular activity, energy in speech, probability of voicing, and linguistic sentiment (i.e. negative or positive). We explored the effects of introducing our selected features at various layers of the NN and found that the location of these features in the network topology had a significant impact on model performance. To ensure the real-time utility of the model, classification was performed over 5 second intervals. We evaluated our model’s performance using leave-one-subject-out crossvalidation and compared the performance to 20 baseline models and a NN with all features included in the input layer.",
"title": ""
},
{
"docid": "1f0796219eaf350fd0e288e22165017d",
"text": "Behçet’s disease, also known as the Silk Road Disease, is a rare systemic vasculitis disorder of unknown etiology. Recurrent attacks of acute inflammation characterize Behçet’s disease. Frequent oral aphthous ulcers, genital ulcers, skin lesions and ocular lesions are the most common manifestations. Inflammation is typically self-limiting in time and relapsing episodes of clinical manifestations represent a hallmark of Behçet’s disease. Other less frequent yet severe manifestations that have a major prognostic impact involve the eyes, the central nervous system, the main large vessels and the gastrointestinal tract. Behçet’s disease has a heterogeneous onset and is associated with significant morbidity and premature mortality. This study presents a current immunological review of the disease and provides a synopsis of clinical aspects and treatment options.",
"title": ""
},
{
"docid": "8b0ac11c05601e93557fe0d5097b4529",
"text": "We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage - the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be \"target earners,\" contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.",
"title": ""
},
{
"docid": "c3a46af77d5ed8fbb68d71cf15fe68cd",
"text": "......................................................................................................................... IV",
"title": ""
}
] |
scidocsrr
|
a86a1562c83d1ccc5d67ffdfa436aad1
|
Religious affiliation and suicide attempt.
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] |
[
{
"docid": "f90471c5c767c40be52c182055c9ebca",
"text": "Deep intracranial tumor removal can be achieved if the neurosurgical robot has sufficient flexibility and stability. Toward achieving this goal, we have developed a spring-based continuum robot, namely a minimally invasive neurosurgical intracranial robot (MINIR-II) with novel tendon routing and tunable stiffness for use in a magnetic resonance imaging (MRI) environment. The robot consists of a pair of springs in parallel, i.e., an inner interconnected spring that promotes flexibility with decoupled segment motion and an outer spring that maintains its smooth curved shape during its interaction with the tissue. We propose a shape memory alloy (SMA) spring backbone that provides local stiffness control and a tendon routing configuration that enables independent segment locking. In this paper, we also present a detailed local stiffness analysis of the SMA backbone and model the relationship between the resistive force at the robot tip and the tension in the tendon. We also demonstrate through experiments, the validity of our local stiffness model of the SMA backbone and the correlation between the tendon tension and the resistive force. We also performed MRI compatibility studies of the three-segment MINIR-II robot by attaching it to a robotic platform that consists of SMA spring actuators with integrated water cooling modules.",
"title": ""
},
{
"docid": "33113cfe0186d1b9ecb379d718c61b7a",
"text": "We propose Guided Zoom, an approach that utilizes spatial grounding to make more informed predictions. It does so by making sure the model has “the right reasons” for a prediction, being defined as reasons that are coherent with those used to make similar correct decisions at training time. The reason/evidence upon which a deep neural network makes a prediction is defined to be the spatial grounding, in the pixel space, for a specific class conditional probability in the model output. Guided Zoom questions how reasonable the evidence used to make a prediction is. In state-of-the-art deep single-label classification models, the top-k (k = 2, 3, 4, . . . ) accuracy is usually significantly higher than the top-1 accuracy. This is more evident in fine-grained datasets, where differences between classes are quite subtle. We show that Guided Zoom results in the refinement of a model’s classification accuracy on three finegrained classification datasets. We also explore the complementarity of different grounding techniques, by comparing their ensemble to an adversarial erasing approach that iteratively reveals the next most discriminative evidence.",
"title": ""
},
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
{
"docid": "0d842651bb39171815625bfb606d2ba6",
"text": "The parallel coordinates technique is widely used for the analysis of multivariate data. During recent decades significant research efforts have been devoted to exploring the applicability of the technique and to expand upon it, resulting in a variety of extensions. Of these many research activities, a surprisingly small number concerns user-centred evaluations investigating actual use and usability issues for different tasks, data and domains. The result is a clear lack of convincing evidence to support and guide uptake by users as well as future research directions. To address these issues this paper contributes a thorough literature survey of what has been done in the area of user-centred evaluation of parallel coordinates. These evaluations are divided into four categories based on characterization of use, derived from the survey. Based on the data from the survey and the categorization combined with the authors' experience of working with parallel coordinates, a set of guidelines for future research directions is proposed.",
"title": ""
},
{
"docid": "4ab38550da5d9de4064d907248078a59",
"text": "The study of culture and self casts psychology's understanding of the self, identity, or agency as central to the analysis and interpretation of behavior and demonstrates that cultures and selves define and build upon each other in an ongoing cycle of mutual constitution. In a selective review of theoretical and empirical work, we define self and what the self does, define culture and how it constitutes the self (and vice versa), define independence and interdependence and determine how they shape psychological functioning, and examine the continuing challenges and controversies in the study of culture and self. We propose that a self is the \"me\" at the center of experience-a continually developing sense of awareness and agency that guides actions and takes shape as the individual, both brain and body, becomes attuned to various environments. Selves incorporate the patterning of their various environments and thus confer particular and culture-specific form and function to the psychological processes they organize (e.g., attention, perception, cognition, emotion, motivation, interpersonal relationship, group). In turn, as selves engage with their sociocultural contexts, they reinforce and sometimes change the ideas, practices, and institutions of these environments.",
"title": ""
},
{
"docid": "f0f17b4d7bf858e84ed12d0f5f309d4e",
"text": "KEY CLINICAL MESSAGE\nPatient complained of hearing loss and tinnitus after the onset of Reiter's syndrome. Audiometry confirmed the hearing loss on the left ear; blood work showed increased erythrocyte sedimentation rate and C3 fraction of the complement. Genotyping for HLA-B27 was positive. Treatment with prednisolone did not improve the hearing levels.",
"title": ""
},
{
"docid": "b44b177f50402015e343e78afe4d7523",
"text": "A design of a novel wireless implantable blood pressure sensing microsystem for advanced biological research is presented. The system employs a miniature instrumented elastic cuff, wrapped around a blood vessel, for small laboratory animal real-time blood pressure monitoring. The elastic cuff is made of biocompatible soft silicone material by a molding process and is filled by insulating silicone oil with an immersed MEMS capacitive pressure sensor interfaced with low-power integrated electronic system. This technique avoids vessel penetration and substantially minimizes vessel restriction due to the soft cuff elasticity, and is thus attractive for long-term implant. The MEMS pressure sensor detects the coupled blood pressure waveform caused by the vessel expansion and contraction, followed by amplification, 11-bit digitization, and wireless FSK data transmission to an external receiver. The integrated electronics are designed with capability of receiving RF power from an external power source and converting the RF signal to a stable 2 V DC supply in an adaptive manner to power the overall implant system, thus enabling the realization of stand-alone batteryless implant microsystem. The electronics are fabricated in a 1.5 μm CMOS process and occupy an area of 2 mm × 2 mm. The prototype monitoring cuff is wrapped around the right carotid artery of a laboratory rat to measure real-time blood pressure waveform. The measured in vivo blood waveform is compared with a reference waveform recorded simultaneously using a commercial catheter-tip transducer inserted into the left carotid artery. The two measured waveforms are closely matched with a constant scaling factor. The ASIC is interfaced with a 5-mm-diameter RF powering coil with four miniature surface-mounted components (one inductor and three capacitors) over a thin flexible substrate by bond wires, followed by silicone coating and packaging with the prototype blood pressure monitoring cuff. The overall system achieves a measured average sensitivity of 7 LSB/ mmHg, a nonlinearity less than 2.5% of full scale, and a hysteresis less than 1% of full scale. From noise characterization, a blood vessel pressure change sensing resolution 328 of 1 mmHg can be expected. The system weighs 330 mg, representing an order of magnitude mass reduction compared with state-of-the-art commercial technology.",
"title": ""
},
{
"docid": "abc48ae19e2ea1e1bb296ff0ccd492a2",
"text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also",
"title": ""
},
{
"docid": "aff973bc6789375b6518814dbcfde4d9",
"text": "OBJECTIVE\nThis paper aims to report on the accuracy of estimating sleep stages using a wrist-worn device that measures movement using a 3D accelerometer and an optical pulse photoplethysmograph (PPG).\n\n\nAPPROACH\nOvernight recordings were obtained from 60 adult participants wearing these devices on their left and right wrist, simultaneously with a Type III home sleep testing device (Embletta MPR) which included EEG channels for sleep staging. The 60 participants were self-reported normal sleepers (36 M: 24 F, age = 34 ± 10, BMI = 28 ± 6). The Embletta recordings were scored for sleep stages using AASM guidelines and were used to develop and validate an automated sleep stage estimation algorithm, which labeled sleep stages as one of Wake, Light (N1 or N2), Deep (N3) and REM (REM). Features were extracted from the accelerometer and PPG sensors, which reflected movement, breathing and heart rate variability.\n\n\nMAIN RESULTS\nBased on leave-one-out validation, the overall per-epoch accuracy of the automated algorithm was 69%, with a Cohen's kappa of 0.52 ± 0.14. There was no observable bias to under- or over-estimate wake, light, or deep sleep durations. REM sleep duration was slightly over-estimated by the system. The most common misclassifications were light/REM and light/wake mislabeling.\n\n\nSIGNIFICANCE\nThe results indicate that a reasonable degree of sleep staging accuracy can be achieved using a wrist-worn device, which may be of utility in longitudinal studies of sleep habits.",
"title": ""
},
{
"docid": "a81c87374e7ea9a3066f643ac89bfd2b",
"text": "Image edge detection is a process of locating the e dg of an image which is important in finding the approximate absolute gradient magnitude at each point I of an input grayscale image. The problem of getting an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transferri ng a 2-D pixel array into statistically uncorrelated data se t enhances the removal of redundant data, as a result, reduction of the amount of data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating gradient in the x-direction and the other estimating gradient in y–direction. The Sobel detector is incredibly sensit ive o noise in pictures, it effectively highlight them as edges. Henc e, Sobel operator is recommended in massive data communication found in data transfer.",
"title": ""
},
{
"docid": "cc85b9708da18ae8ffef0a4285d218fb",
"text": "OBJECTIVE\nThis systematic review addresses the effectiveness of occupational therapy-related interventions for adults with fibromyalgia.\n\n\nMETHOD\nWe examined the literature published between January 2000 and June 2014. A total of 322 abstracts from five databases were reviewed. Forty-two Level I studies met the inclusion criteria. Studies were evaluated primarily with regard to the following outcomes: daily activities, pain, depressive symptoms, fatigue, and sleep.\n\n\nRESULTS\nStrong evidence was found for interventions categorized for this review as cognitive-behavioral interventions; relaxation and stress management; emotional disclosure; physical activity; and multidisciplinary interventions for improving daily living, pain, depressive symptoms, and fatigue. There was limited to no evidence for self-management, and few interventions resulted in better sleep.\n\n\nCONCLUSION\nAlthough the evidence supports interventions within the scope of occupational therapy practice for people with fibromyalgia, few interventions were occupation based.",
"title": ""
},
{
"docid": "c5cc7fc9651ff11d27e08e1910a3bd20",
"text": "An omnidirectional circularly polarized (OCP) antenna operating at 28 GHz is reported and has been found to be a promising candidate for device-to-device (D2D) communications in the next generation (5G) wireless systems. The OCP radiation is realized by systematically integrating electric and magnetic dipole elements into a compact disc-shaped configuration (9.23 mm $^{3} =0.008~\\lambda _{0}^{3}$ at 28 GHz) in such a manner that they are oriented in parallel and radiate with the proper phase difference. The entire antenna structure was printed on a single piece of dielectric substrate using standard PCB manufacturing technologies and, hence, is amenable to mass production. A prototype OCP antenna was fabricated on Rogers 5880 substrate and was tested. The measured results are in good agreement with their simulated values and confirm the reported design concepts. Good OCP radiation patterns were produced with a measured peak realized RHCP gain of 2.2 dBic. The measured OCP overlapped impedance and axial ratio bandwidth was 2.2 GHz, from 26.5 to 28.7 GHz, an 8 % fractional bandwidth, which completely covers the 27.5 to 28.35 GHz band proposed for 5G cellular systems.",
"title": ""
},
{
"docid": "f5405c8fb7ad62d4277837bd7036b0d3",
"text": "Context awareness is one of the important fields in ubiquitous computing. Smart Home, a specific instance of ubiquitous computing, provides every family with opportunities to enjoy the power of hi-tech home living. Discovering that relationship among user, activity and context data in home environment is semantic, therefore, we apply ontology to model these relationships and then reason them as the semantic information. In this paper, we present the realization of smart home’s context-aware system based on ontology. We discuss the current challenges in realizing the ontology context base. These challenges can be listed as collecting context information from heterogeneous sources, such as devices, agents, sensors into ontology, ontology management, ontology querying, and the issue related to environment database explosion.",
"title": ""
},
{
"docid": "f590eac54deff0c65732cf9922db3b93",
"text": "Lichen planus (LP) is a common chronic inflammatory condition that can affect skin and mucous membranes, including the oral mucosa. Because of the anatomic, physiologic and functional peculiarities of the oral cavity, the oral variant of LP (OLP) requires specific evaluations in terms of diagnosis and management. In this comprehensive review, we discuss the current developments in the understanding of the etiopathogenesis, clinical-pathologic presentation, and treatment of OLP, and provide follow-up recommendations informed by recent data on the malignant potential of the disease as well as health economics evaluations.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
},
{
"docid": "d53726710ce73fbcf903a1537f149419",
"text": "We treat in this paper Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever is the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereas an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem via solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.",
"title": ""
},
{
"docid": "f890136617ffb55e66b56f9c93eeeeb0",
"text": "We were guided by the Protection Motivation Theory to test the motivational interviewing effects on attitude and intention of obese and overweight women to do regular physical activity. In a randomized controlled trial, we selected using convenience sampling 60 overweight and obese women attending health centres. The women were allocated to 2 groups of 30 receiving a standard weight-control programme or motivational interviewing. All constructs of the theory (perceived susceptibility, severity, self-efficacy and response efficacy) and all anthropometric characteristics (except body mass index) were significantly different between the groups at 3 study times. The strongest predictors of intention to do regular physical exercise were perceived response efficacy and attitude at 2- and 6-months follow-up. We showed that targeting motivational interviewing with an emphasis on Protection Motivation Theory constructs appeared to be beneficial for designing and developing appropriate intervention to improve physical activity status among women with overweight and obesity.",
"title": ""
},
{
"docid": "9c2c8ec160d367762cf43b05c5f10db2",
"text": "Microplastics have been reported in marine environments worldwide. Accurate assessment of quantity and type is therefore needed. Here, we propose new techniques for extracting microplastics from sediment and invertebrate tissue. The method developed for sediments involves a volume reduction of the sample by elutriation, followed by density separation using a high density NaI solution. Comparison of this methods' efficiency to that of a widely used technique indicated that the new method has a considerably higher extraction efficiency. For fibres and granules an increase of 23% and 39% was noted, extraction efficiency of PVC increased by 100%. The second method aimed at extracting microplastics from animal tissues based on chemical digestion. Extraction of microspheres yielded high efficiencies (94-98%). For fibres, efficiencies were highly variable (0-98%), depending on polymer type. The use of these two techniques will result in a more complete assessment of marine microplastic concentrations.",
"title": ""
},
{
"docid": "ee7193740e341a10d839bc9d3180c509",
"text": "Large-scale databases of human activity in social media have captured scientific and policy attention, producing a flood of research and discussion. This paper considers methodological and conceptual challenges for this emergent field, with special attention to the validity and representativeness of social media big data analyses. Persistent issues include the over-emphasis of a single platform, Twitter, sampling biases arising from selection by hashtags, and vague and unrepresentative sampling frames. The sociocultural complexity of user behavior aimed at algorithmic invisibility (such as subtweeting, mock-retweeting, use of “screen captures” for text, etc.) further complicate interpretation of big data social media. Other challenges include accounting for field effects, i.e. broadly consequential events that do not diffuse only through the network under study but affect the whole society. The application of network methods from other fields to the study of human social activity may not always be appropriate. The paper concludes with a call to action on practical steps to improve our analytic capacity in this promising, rapidly-growing field.",
"title": ""
},
{
"docid": "3509f90848c45ad34ebbd30b9d357c29",
"text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.",
"title": ""
}
] |
scidocsrr
|
3ce82168901c47add400e917decd264f
|
1 An Introduction to Conditional Random Fields for Relational Learning
|
[
{
"docid": "346cd0b680f7da2ff8ab3d97a294086c",
"text": "Inference in Conditional Random Fields and Hidden Markov Models is done using the Viterbi algorithm, an efficient dynamic programming algorithm. In many cases, general (non-local and non-sequential) constraints may exist over the output sequence, but cannot be incorporated and exploited in a natural way by this inference procedure. This paper proposes a novel inference procedure based on integer linear programming (ILP) and extends CRF models to naturally and efficiently support general constraint structures. For sequential constraints, this procedure reduces to simple linear programming as the inference process. Experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.",
"title": ""
},
{
"docid": "dea3bce3f636c87fad95f255aceec858",
"text": "In recent work, conditional Markov chain models (CMM) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We will show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new more powerful features can be introduced. The grammar based approach also results in semantic information (encoded in the form of a parse tree) which could be used for IR applications like question answering. The specific problem we consider is of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).",
"title": ""
}
] |
[
{
"docid": "5e581fa162c4662ef26450ed24122ccd",
"text": "Article history: Received 6 December 2010 Received in revised form 6 December 2012 Accepted 12 January 2013 Available online 11 February 2013",
"title": ""
},
{
"docid": "8822138c493df786296c02315bea5802",
"text": "Photodefinable Polyimides (PI) and polybenz-oxazoles (PBO) which have been widely used for various electronic applications such as buffer coating, interlayer dielectric and protection layer usually need high temperature cure condition over 300 °C to complete the cyclization and achieve good film properties. In addition, PI and PBO are also utilized recently for re-distribution layer of wafer level package. In this application, lower temperature curability is strongly required in order to prevent the thermal damage of the semi-conductor device and the other packaging material. Then, to meet this requirement, we focused on pre-cyclized polyimide with phenolic hydroxyl groups since this polymer showed the good solubility to aqueous TMAH and there was no need to apply high temperature cure condition. As a result of our study, the positive-tone photodefinable material could be obtained by using DNQ and combination of epoxy cross-linker enabled to enhance the chemical and PCT resistance of the cured film made even at 170 °C. Furthermore, the adhesion to copper was improved probably due to secondary hydroxyl groups which were generated from reacted epoxide groups. In this report, we introduce our concept of novel photodefinable positive-tone polyimide for low temperature cure.",
"title": ""
},
{
"docid": "3f8e8801ca7a8f3c0ed5997d1adc894c",
"text": "This paper presents a method for adaptive fracture propagation in thin sheets. A high-quality triangle mesh is dynamically restructured to adaptively maintain detail wherever it is required by the simulation. These requirements include refining where cracks are likely to either start or advance. Refinement ensures that the stress distribution around the crack tip is well resolved, which is vital for creating highly detailed, realistic crack paths. The dynamic meshing framework allows subsequent coarsening once areas are no longer likely to produce cracking. This coarsening allows efficient simulation by reducing the total number of active nodes and by preventing the formation of thin slivers around the crack path. A local reprojection scheme and a substepping fracture process help to ensure stability and prevent a loss of plasticity during remeshing. By including bending and stretching plasticity models, the method is able to simulate a large range of materials with very different fracture behaviors.",
"title": ""
},
{
"docid": "9c7f9ff55b02bd53e94df004dcc615b9",
"text": "Support Vector Machines (SVM) is among the most popular classification techniques in machine learning, hence designing fast primal SVM algorithms for large-scale datasets is a hot topic in recent years. This paper presents a new L2norm regularized primal SVM solver using Augmented Lagrange Multipliers, with linear computational cost for Lp-norm loss functions. The most computationally intensive steps (that determine the algorithmic complexity) of the proposed algorithm is purely and simply matrix-byvector multiplication, which can be easily parallelized on a multi-core server for parallel computing. We implement and integrate our algorithm into the interfaces and framework of the well-known LibLinear software toolbox. Experiments show that our algorithm is with stable performance and on average faster than the stateof-the-art solvers such as SVM perf , Pegasos and the LibLinear that integrates the TRON, PCD and DCD algorithms.",
"title": ""
},
{
"docid": "a10aa780d9f1a65461ad0874173d8f56",
"text": "OS fingerprinting tries to identify the type and version of a system based on gathered information of a target host. It is an essential step for many subsequent penetration attempts and attacks. Traditional OS fingerprinting depends on banner grabbing schemes or network traffic analysis results to identify the system. These interactive procedures can be detected by intrusion detection systems (IDS) or fooled by fake network packets. In this paper, we propose a new OS fingerprinting mechanism in virtual machine hypervisors that adopt the memory de-duplication technique. Specifically, when multiple memory pages with the same contents occupy only one physical page, their reading and writing access delay will demonstrate some special properties. We use the accumulated access delay to the memory pages that are unique to some specific OS images to derive out whether or not our VM instance and the target VM are using the same OS. The experiment results on VMware ESXi hypervisor with both Windows and Ubuntu Linux OS images show the practicability of the attack. We also discuss the mechanisms to defend against such attacks by the hypervisors and VMs.",
"title": ""
},
{
"docid": "f264d5b90dfb774e9ec2ad055c4ebe62",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "7fb2348fbde9dbef88357cc79ff394c5",
"text": "This paper presents a measurement system with capacitive sensor connected to an open-source electronic platform Arduino Uno. A simple code was modified in the project, which ensures that the platform works as interface for the sensor. The code can be modified and upgraded at any time to fulfill other specific applications. The simulations were carried out in the platform's own environment and the collected data are represented in graphical form. Accuracy of developed measurement platform is 0.1 pF.",
"title": ""
},
{
"docid": "24d3cbcc95ff290b0b598891ad41d44d",
"text": "The optimal power flow (OPF) problem is nonconvex and generally hard to solve. In this paper, we propose a semidefinite programming (SDP) optimization, which is the dual of an equivalent form of the OPF problem. A global optimum solution to the OPF problem can be retrieved from a solution of this convex dual problem whenever the duality gap is zero. A necessary and sufficient condition is provided in this paper to guarantee the existence of no duality gap for the OPF problem. This condition is satisfied by the standard IEEE benchmark systems with 14, 30, 57, 118, and 300 buses as well as several randomly generated systems. Since this condition is hard to study, a sufficient zero-duality-gap condition is also derived. This sufficient condition holds for IEEE systems after small resistance (10-5 per unit) is added to every transformer that originally assumes zero resistance. We investigate this sufficient condition and justify that it holds widely in practice. The main underlying reason for the successful convexification of the OPF problem can be traced back to the modeling of transformers and transmission lines as well as the non-negativity of physical quantities such as resistance and inductance.",
"title": ""
},
{
"docid": "20563a2f75e074fe2a62a5681167bc01",
"text": "The introduction of a new generation of attractive touch screen-based devices raises many basic usability questions whose answers may influence future design and market direction. With a set of current mobile devices, we conducted three experiments focusing on one of the most basic interaction actions on touch screens: the operation of soft buttons. Issues investigated in this set of experiments include: a comparison of soft button and hard button performance; the impact of audio and vibrato-tactile feedback; the impact of different types of touch sensors on use, behavior, and performance; a quantitative comparison of finger and stylus operation; and an assessment of the impact of soft button sizes below the traditional 22 mm recommendation as well as below finger width.",
"title": ""
},
{
"docid": "08c6bd4aae8995a2291e22ccfcf026f2",
"text": "This paper presents an example-based method for calculating skeleton-driven body deformations. Our example data consists of range scans of a human body in a variety of poses. Using markers captured during range scanning, we construct a kinematic skeleton and identify the pose of each scan. We then construct a mutually consistent parameterization of all the scans using a posable subdivision surface template. The detail deformations are represented as displacements from this surface, and holes are filled smoothly within the displacement maps. Finally, we combine the range scans using k-nearest neighbor interpolation in pose space. We demonstrate results for a human upper body with controllable pose, kinematics, and underlying surface shape.",
"title": ""
},
{
"docid": "dc84e401709509638a1a9e24d7db53e1",
"text": "AIM AND OBJECTIVES\nExocrine pancreatic insufficiency caused by inflammation or pancreatic tumors results in nutrient malfunction by a lack of digestive enzymes and neutralization compounds. Despite satisfactory clinical results with current enzyme therapies, a normalization of fat absorption in patients is rare. An individualized therapy is required that includes high dosage of enzymatic units, usage of enteric coating, and addition of gastric proton pump inhibitors. The key goal to improve this therapy is to identify digestive enzymes with high activity and stability in the gastrointestinal tract.\n\n\nMETHODS\nWe cloned and analyzed three novel ciliate lipases derived from Tetrahymena thermophila. Using highly precise pH-STAT-titration and colorimetric methods, we determined stability and lipolytic activity under physiological conditions in comparison with commercially available porcine and fungal digestive enzyme preparations. We measured from pH 2.0 to 9.0, with different bile salts concentrations, and substrates such as olive oil and fat derived from pig diet.\n\n\nRESULTS\nCiliate lipases CL-120, CL-130, and CL-230 showed activities up to 220-fold higher than Creon, pancreatin standard, and rizolipase Nortase within a pH range from pH 2.0 to 9.0. They are highly active in the presence of bile salts and complex pig diet substrate, and more stable after incubation in human gastric juice compared with porcine pancreatic lipase and rizolipase.\n\n\nCONCLUSIONS\nThe newly cloned and characterized lipases fulfilled all requirements for high activity under physiological conditions. These novel enzymes are therefore promising candidates for an improved enzyme replacement therapy for exocrine pancreatic insufficiency.",
"title": ""
},
{
"docid": "60a92a659fbfe0c81da9a6902e062455",
"text": "Public knowledge of crime and justice is largely derived from the media. This paper examines the influence of media consumption on fear of crime, punitive attitudes and perceived police effectiveness. This research contributes to the literature by expanding knowledge on the relationship between fear of crime and media consumption. This study also contributes to limited research on the media’s influence on punitive attitudes, while providing a much-needed analysis of the relationship between media consumption and satisfaction with the police. Employing OLS regression, the results indicate that respondents who are regular viewers of crime drama are more likely to fear crime. However, the relationship is weak. Furthermore, the results indicate that gender, education, income, age, perceived neighborhood problems and police effectiveness are statistically related to fear of crime. In addition, fear of crime, income, marital status, race, and education are statistically related to punitive attitudes. Finally, age, fear of crime, race, and perceived neighborhood problems are statistically related to perceived police effectiveness.",
"title": ""
},
{
"docid": "c444da1de06518f4b20db3ea99b327da",
"text": "Allowing computation to be performed at the edge of a network, edge computing has been recognized as a promising approach to address some challenges in the cloud computing paradigm, particularly to the delay-sensitive and mission-critical applications like real-time surveillance. Prevalence of networked cameras and smart mobile devices enable video analytics at the network edge. However, human objects detection and tracking are still conducted at cloud centers, as real-time, online tracking is computationally expensive. In this paper, we investigated the feasibility of processing surveillance video streaming at the network edge for real-time, uninterrupted moving human objects tracking. Moving human detection based on Histogram of Oriented Gradients (HOG) and linear Support Vector Machine (SVM) is illustrated for features extraction, and an efficient multi-object tracking algorithm based on Kernelized Correlation Filters (KCF) is proposed. Implemented and tested on Raspberry Pi 3, our experimental results are very encouraging, which validated the feasibility of the proposed approach toward a real-time surveillance solution at the edge of networks.",
"title": ""
},
{
"docid": "ab0d19b1cb4a0f5d283f67df35c304f4",
"text": "OBJECTIVE\nWe compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects.\n\n\nMETHOD\nSixty nine subjects (38 BP and 31 HC), 8-17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory.\n\n\nRESULTS\nBP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety.\n\n\nCONCLUSIONS\nThese findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understating of BP children and adolescents beyond mood symptomatology.",
"title": ""
},
{
"docid": "97561632e9d87093a5de4f1e4b096df7",
"text": "Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer that wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate. Guy Shani Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: guyshani@microsoft.com Asela Gunawardana Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: aselag@microsoft.com",
"title": ""
},
{
"docid": "ba6865dc3c93ac52c9f1050f159b9e1a",
"text": "A review of various properties of ceramic-reinforced aluminium matrix composites is presented in this paper. The properties discussed include microstructural, optical, physical and mechanical behaviour of ceramic-reinforced aluminium matrix composites and effects of reinforcement fraction, particle size, heat treatment and extrusion process on these properties. The results obtained by many researchers indicated the uniform distribution of reinforced particles with localized agglomeration at some places, when the metal matrix composite was processed through stir casting method. The density, hardness, compressive strength and toughness increased with increasing reinforcement fraction; however, these properties may reduce in the presence of porosity in the composite material. The particle size of reinforcements affected the hardness adversely. Tensile strength and flexural strength were observed to be increased up to a certain reinforcement fraction in the composites, beyond which these were reduced. The mechanical properties of the composite materials were improved by either thermal treatment or extrusion process. Initiation and growth of fine microcracks leading to macroscopic failure, ductile failure of the aluminium matrix, combination of particle fracture and particle pull-out, overload failure under tension and brittle fracture were the failure mode and mechanisms, as observed by previous researchers, during fractography analysis of tensile specimens of ceramic-reinforced aluminium matrix composites.",
"title": ""
},
{
"docid": "ae43cf8140bbaf7aa8bc04eceb130fda",
"text": "Network virtualization has become increasingly prominent in recent years. It enables the creation of network infrastructures that are specifically tailored to the needs of distinct network applications and supports the instantiation of favorable environments for the development and evaluation of new architectures and protocols. Despite the wide applicability of network virtualization, the shared use of routing devices and communication channels leads to a series of security-related concerns. It is necessary to provide protection to virtual network infrastructures in order to enable their use in real, large scale environments. In this paper, we present an overview of the state of the art concerning virtual network security. We discuss the main challenges related to this kind of environment, some of the major threats, as well as solutions proposed in the literature that aim to deal with different security aspects.",
"title": ""
},
{
"docid": "59f29d3795e747bb9cee8fcbf87cb86f",
"text": "This paper introduces the development of a semi-active friction based variable physical damping actuator (VPDA) unit. The realization of this unit aims to facilitate the control of compliant robotic joints by providing physical variable damping on demand assisting on the regulation of the oscillations induced by the introduction of compliance. The mechatronics details and the dynamic model of the damper are introduced. The proposed variable damper mechanism is evaluated on a simple 1-DOF compliant joint linked to the ground through a torsion spring. This flexible connection emulates a compliant joint, generating oscillations when the link is perturbed. Preliminary results are presented to show that the unit and the proposed control scheme are capable of replicating simulated relative damping values with good fidelity.",
"title": ""
},
{
"docid": "0342f89c44e0b86026953196de34b608",
"text": "In this paper, we introduce an approach for recognizing the absence of opposing arguments in persuasive essays. We model this task as a binary document classification and show that adversative transitions in combination with unigrams and syntactic production rules significantly outperform a challenging heuristic baseline. Our approach yields an accuracy of 75.6% and 84% of human performance in a persuasive essay corpus with various topics.",
"title": ""
},
{
"docid": "d829b87eebfb102b25b5070d07f80b5a",
"text": "We describe the use of the social reference management website CiteULike for recommending scientific articles to users, based on their reference library. We test three different collaborative filtering algorithms, and find that user-based filtering performs best. A temporal analysis of the data indexed by CiteULike shows that it takes about two years for the cold-start problem to disappear and recommendation performance to improve.",
"title": ""
}
] |
scidocsrr
|
489ff5cb21cf3abef4c5b53e127428ca
|
MD-Logic Artificial Pancreas System
|
[
{
"docid": "97006c15d2158da060d8aa6caf64a14d",
"text": "A nonlinear model predictive controller has been developed to maintain normoglycemia in subjects with type 1 diabetes during fasting conditions such as during overnight fast. The controller employs a compartment model, which represents the glucoregulatory system and includes submodels representing absorption of subcutaneously administered short-acting insulin Lispro and gut absorption. The controller uses Bayesian parameter estimation to determine time-varying model parameters. Moving target trajectory facilitates slow, controlled normalization of elevated glucose levels and faster normalization of low glucose values. The predictive capabilities of the model have been evaluated using data from 15 clinical experiments in subjects with type 1 diabetes. The experiments employed intravenous glucose sampling (every 15 min) and subcutaneous infusion of insulin Lispro by insulin pump (modified also every 15 min). The model gave glucose predictions with a mean square error proportionally related to the prediction horizon with the value of 0.2 mmol L(-1) per 15 min. The assessment of clinical utility of model-based glucose predictions using Clarke error grid analysis gave 95% of values in zone A and the remaining 5% of values in zone B for glucose predictions up to 60 min (n = 1674). In conclusion, adaptive nonlinear model predictive control is promising for the control of glucose concentration during fasting conditions in subjects with type 1 diabetes.",
"title": ""
}
] |
[
{
"docid": "0f4b5b92995e11fb57cf9305acc7cc4b",
"text": "Purpose: The suppression of motion artefacts from MR images is a challenging task. The purpose of this paper is to develop a standalone novel technique to suppress motion artefacts from MR images using a data-driven deep learning approach. Methods: A deep learning convolutional neural network (CNN) was developed to remove motion artefacts in brain MR images. A CNN was trained on simulated motion corrupted images to identify and suppress artefacts due to the motion. The network was an encoderdecoder CNN architecture where the encoder decomposed the motion corrupted images into a set of feature maps. The feature maps were then combined by the decoder network to generate a motion-corrected image. The network was tested on an unseen simulated dataset and an experimental, motion corrupted in vivo brain dataset. Results: The trained network was able to suppress the motion artefacts in the simulated motion corrupted images, and the mean percentage error in the motion corrected images was 2.69 % with a standard deviation of 0.95 %. The network was able to effectively suppress the motion artefacts from the experimental dataset, demonstrating the generalisation capability of the trained network. Conclusion: A novel and generic motion correction technique has been developed that can suppress motion artefacts from motion corrupted MR images. The proposed technique is a standalone post-processing method that does not interfere with data acquisition or reconstruction parameters, thus making it suitable for a multitude of MR sequences.",
"title": ""
},
{
"docid": "268a86c25f1974630fada777790b162b",
"text": "The paper presents a novel method and system for personalised (individualised) modelling of spatio/spectro-temporal data (SSTD) and prediction of events. A novel evolving spiking neural network reservoir system (eSNNr) is proposed for the purpose. The system consists of: spike-time encoding module of continuous value input information into spike trains; a recurrent 3D SNNr; eSNN as an evolving output classifier. Such system is generated for every new individual, using existing data of similar individuals. Subject to proper training and parameter optimisation, the system is capable of accurate spatiotemporal pattern recognition (STPR) and of early prediction of individual events. The method and the system are generic, applicable to various SSTD and classification and prediction problems. As a case study, the method is applied for early prediction of occurrence of stroke on an individual basis. Preliminary experiments demonstrated a significant improvement in accuracy and time of event prediction when using the proposed method when compared with standard machine learning methods, such as MLR, SVM, MLP. Future development and applications are discussed.",
"title": ""
},
{
"docid": "0c1d6b7bc08a2c292880ced04c74c85d",
"text": "The practice of mindfulness meditation was used in a 10-week Stress Reduction and Relaxation Program to train chronic pain patients in self-regulation. The meditation facilitates an attentional stance towards proprioception known as detached observation. This appears to cause an \"uncoupling \" of the sensory dimension of the pain experience from the affective/evaluative alarm reaction and reduce the experience of suffering via cognitive reappraisal. Data are presented on 51 chronic pain patients who had not improved with traditional medical care. The dominant pain categories were low back, neck and shoulder, and headache. Facial pain, angina pectoris, noncoronary chest pain, and GI pain were also represented. At 10 weeks, 65% of the patients showed a reduction of greater than or equal to 33% in the mean total Pain Rating Index (Melzack) and 50% showed a reduction of greater than or equal to 50%. Similar decreases were recorded on other pain indices and in the number of medical symptoms reported. Large and significant reductions in mood disturbance and psychiatric symptomatology accompanied these changes and were relatively stable on follow-up. These improvements were independent of the pain category. We conclude that this form of meditation can be used as the basis for an effective behavioral program in self-regulation for chronic pain patients. Key features of the program structure, and the limitations of the present uncontrolled study are discussed.",
"title": ""
},
{
"docid": "83e7119065ededfd731855fe76e76207",
"text": "Introduction: In recent years, the maturity model research has gained wide acceptance in the area of information systems and many Service Oriented Architecture (SOA) maturity models have been proposed. However, there are limited empirical studies on in-depth analysis and validation of SOA Maturity Models (SOAMMs). Objectives: The objective is to present a comprehensive comparison of existing SOAMMs to identify the areas of improvement and the research opportunities. Methods: A systematic literature review is conducted to explore the SOA adoption maturity studies. Results: A total of 20 unique SOAMMs are identified and analyzed in detail. A comparison framework is defined based on SOAMM design and usage support. The results provide guidance for SOA practitioners who are involved in selection, design, and implementation of SOAMMs. Conclusion: Although all SOAMMs propose a measurement framework, only a few SOAMMs provide guidance for selecting and prioritizing improvement measures. The current state of research shows that a gap exists in both prescriptive and descriptive purpose of SOAMM usage and it indicates the need for further research.",
"title": ""
},
{
"docid": "55a45d25580b846af3befde8a1e4dea5",
"text": "Data clustering has been received considerable attention in many applications, such as data mining, document retrieval, image segmentation and pattern classification. The enlarging volumes of information emerging by the progress of technology, makes clustering of very large scale of data a challenging task. In order to deal with the problem, many researchers try to design efficient parallel clustering algorithms. In this paper, we propose a parallel k -means clustering algorithm based on MapReduce, which is a simple yet powerful parallel programming technique. The experimental results demonstrate that the proposed algorithm can scale well and efficiently process large datasets on commodity hardware.",
"title": ""
},
{
"docid": "59d5e1aaa986896435d523b46b3a01e7",
"text": "Biodegradable polymeric materials are the most common carriers for use in drug delivery systems. With this trend, newer drug delivery systems using targeted and controlled release polymeric nanoparticles (NPs) are being developed to manipulate their navigation in complex in vivo environment. However, a clear understanding of the interactions between biological systems and these nanoparticulates is still unexplored. Different studies have been performed to correlate the physicochemical properties of polymeric NPs with the biological responses. Size and surface charge are the two fundamental physicochemical properties that provide a key direction to design an effective NP formulation. In this critical review, our goal is to provide a brief overview on the influences of size and surface charge of different polymeric NPs in vitro and to highlight the challenges involved with in vivo trials.",
"title": ""
},
{
"docid": "b6de60455d4a0b2a1fd909a3ec1d7e17",
"text": "We construct infinitely many manifolds admitting both strongly irreducible and weakly reducible minimal genus Heegaard splittings. Both closed manifolds and manifolds with boundary tori are constructed. The pioneering work of Casson and Gordon [1] shows that a minimal genus Heegaard splitting of an irreducible, non-Haken 3-manifold is necessarily strongly irreducible; by contrast, Haken [2] showed that a minimal genus (indeed, any) Heegaard splitting of a composite 3manifold is necessarily reducible, and hence weakly reducible. The following question of Moriah [8] is therefore quite natural: Question 1 ([8], Question 1.2). Can a 3-manifold M have both weakly reducible and strongly irreducible minimal genus Heegaard splittings? We answer this question affirmatively: Theorem 2. There exist infinitely many closed, orientable 3-manifolds of Heegaard genus 3, each admitting both strongly irreducible and weakly reducible minimal genus Heegaard splittings. Theorem 2 is proved in Section 2. In Remark 7 we offer a strategy to generalize Theorem 2 to construct examples of genus g, for each g ≥ 3; it is easy to see that no such examples can exist if g < 3. In Section 3 we give examples of manifolds with one, two or three torus boundary components, each admitting both strongly irreducible and weakly reducible minimal genus Heegaard splittings. Moreover, Date: December 24, 2008. 1991 Mathematics Subject Classification. 57M99, 57M25.",
"title": ""
},
{
"docid": "9cb02161eb65b06f474a8a263bd93d88",
"text": "BACKGROUND\nIdentifying key variables such as disorders within the clinical narratives in electronic health records has wide-ranging applications within clinical practice and biomedical research. Previous research has demonstrated reduced performance of disorder named entity recognition (NER) and normalization (or grounding) in clinical narratives than in biomedical publications. In this work, we aim to identify the cause for this performance difference and introduce general solutions.\n\n\nMETHODS\nWe use closure properties to compare the richness of the vocabulary in clinical narrative text to biomedical publications. We approach both disorder NER and normalization using machine learning methodologies. Our NER methodology is based on linear-chain conditional random fields with a rich feature approach, and we introduce several improvements to enhance the lexical knowledge of the NER system. Our normalization method - never previously applied to clinical data - uses pairwise learning to rank to automatically learn term variation directly from the training data.\n\n\nRESULTS\nWe find that while the size of the overall vocabulary is similar between clinical narrative and biomedical publications, clinical narrative uses a richer terminology to describe disorders than publications. We apply our system, DNorm-C, to locate disorder mentions and in the clinical narratives from the recent ShARe/CLEF eHealth Task. For NER (strict span-only), our system achieves precision=0.797, recall=0.713, f-score=0.753. For the normalization task (strict span+concept) it achieves precision=0.712, recall=0.637, f-score=0.672. The improvements described in this article increase the NER f-score by 0.039 and the normalization f-score by 0.036. We also describe a high recall version of the NER, which increases the normalization recall to as high as 0.744, albeit with reduced precision.\n\n\nDISCUSSION\nWe perform an error analysis, demonstrating that NER errors outnumber normalization errors by more than 4-to-1. Abbreviations and acronyms are found to be frequent causes of error, in addition to the mentions the annotators were not able to identify within the scope of the controlled vocabulary.\n\n\nCONCLUSION\nDisorder mentions in text from clinical narratives use a rich vocabulary that results in high term variation, which we believe to be one of the primary causes of reduced performance in clinical narrative. We show that pairwise learning to rank offers high performance in this context, and introduce several lexical enhancements - generalizable to other clinical NER tasks - that improve the ability of the NER system to handle this variation. DNorm-C is a high performing, open source system for disorders in clinical text, and a promising step toward NER and normalization methods that are trainable to a wide variety of domains and entities. (DNorm-C is open source software, and is available with a trained model at the DNorm demonstration website: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#DNorm.).",
"title": ""
},
{
"docid": "1e9fe0b5da36281a24b1f6580113f5cf",
"text": "The external load of a team-sport athlete can be measured by tracking technologies, including global positioning systems (GPS), local positioning systems (LPS), and vision-based systems. These technologies allow for the calculation of displacement, velocity and acceleration during a match or training session. The accurate quantification of these variables is critical so that meaningful changes in team-sport athlete external load can be detected. High-velocity running, including sprinting, may be important for specific team-sport match activities, including evading an opponent or creating a shot on goal. Maximal accelerations are energetically demanding and frequently occur from a low velocity during team-sport matches. Despite extensive research, conjecture exists regarding the thresholds by which to classify the high velocity and acceleration activity of a team-sport athlete. There is currently no consensus on the definition of a sprint or acceleration effort, even within a single sport. The aim of this narrative review was to examine the varying velocity and acceleration thresholds reported in athlete activity profiling. The purposes of this review were therefore to (1) identify the various thresholds used to classify high-velocity or -intensity running plus accelerations; (2) examine the impact of individualized thresholds on reported team-sport activity profile; (3) evaluate the use of thresholds for court-based team-sports and; (4) discuss potential areas for future research. The presentation of velocity thresholds as a single value, with equivocal qualitative descriptors, is confusing when data lies between two thresholds. In Australian football, sprint efforts have been defined as activity >4.00 or >4.17 m·s-1. Acceleration thresholds differ across the literature, with >1.11, 2.78, 3.00, and 4.00 m·s-2 utilized across a number of sports. It is difficult to compare literature on field-based sports due to inconsistencies in velocity and acceleration thresholds, even within a single sport. Velocity and acceleration thresholds have been determined from physical capacity tests. Limited research exists on the classification of velocity and acceleration data by female team-sport athletes. Alternatively, data mining techniques may be used to report team-sport athlete external load, without the requirement of arbitrary or physiologically defined thresholds.",
"title": ""
},
{
"docid": "36fca3bd6a23b2f99438fe07ec0f0b9f",
"text": "Best management practices (BMPs) have been widely used to address hydrology and water quality issues in both agricultural and urban areas. Increasing numbers of BMPs have been studied in research projects and implemented in watershed management projects, but a gap remains in quantifying their effectiveness through time. In this paper, we review the current knowledge about BMP efficiencies, which indicates that most empirical studies have focused on short-term efficiencies, while few have explored long-term efficiencies. Most simulation efforts that consider BMPs assume constant performance irrespective of ages of the practices, generally based on anticipated maintenance activities or the expected performance over the life of the BMP(s). However, efficiencies of BMPs likely change over time irrespective of maintenance due to factors such as degradation of structures and accumulation of pollutants. Generally, the impacts of BMPs implemented in water quality protection programs at watershed levels have not been as rapid or large as expected, possibly due to overly high expectations for practice long-term efficiency, with BMPs even being sources of pollutants under some conditions and during some time periods. The review of available datasets reveals that current data are limited regarding both short-term and long-term BMP efficiency. Based on this review, this paper provides suggestions regarding needs and opportunities. Existing practice efficiency data need to be compiled. New data on BMP efficiencies that consider important factors, such as maintenance activities, also need to be collected. Then, the existing and new data need to be analyzed. Further research is needed to create a framework, as well as modeling approaches built on the framework, to simulate changes in BMP efficiencies with time. The research community needs to work together in addressing these needs and opportunities, which will assist decision makers in formulating better decisions regarding BMP implementation in watershed management projects.",
"title": ""
},
{
"docid": "708d024f7fccc00dd3961ecc9aca1893",
"text": "Transportation networks play a crucial role in human mobility, the exchange of goods and the spread of invasive species. With 90 per cent of world trade carried by sea, the global network of merchant ships provides one of the most important modes of transportation. Here, we use information about the itineraries of 16 363 cargo ships during the year 2007 to construct a network of links between ports. We show that the network has several features that set it apart from other transportation networks. In particular, most ships can be classified into three categories: bulk dry carriers, container ships and oil tankers. These three categories do not only differ in the ships' physical characteristics, but also in their mobility patterns and networks. Container ships follow regularly repeating paths whereas bulk dry carriers and oil tankers move less predictably between ports. The network of all ship movements possesses a heavy-tailed distribution for the connectivity of ports and for the loads transported on the links with systematic differences between ship types. The data analysed in this paper improve current assumptions based on gravity models of ship movements, an important step towards understanding patterns of global trade and bioinvasion.",
"title": ""
},
{
"docid": "09e573ba5fdb1aff5533442a897f1e2d",
"text": "Subjectivityin natural language refers to aspects of language used to express opinions and evaluations (Banfield, 1982; Wiebe, 1994). There are numerous applications for which knowledge of subjectivity is relevant, including genre detection, information extraction, and information retrieval. This paper shows promising results for a straightforward method of identifying collocational clues of subjectivity, as well as evidence of the usefulness of these clues for recognizing opinionated documents.",
"title": ""
},
{
"docid": "67bc81066dbe06ac615df861435fdbd9",
"text": "When a three-dimensional ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon, the quantum anomalous Hall effect, provides a conceptually new platform for studies of 1D transport, distinct from the traditionally studied quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt-driven crossover from predominantly edge-state transport to diffusive transport in Crx(Bi,Sb)(2-x)Te3 thin films. This crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain by employing a Landauer-Büttiker formalism. Our methodology provides a powerful means of quantifying dissipative effects in temperature and chemical potential regimes far from perfect quantization.",
"title": ""
},
{
"docid": "5320d7790348cc0e48dcf76428811d7b",
"text": "central and, in some ways, most familiar concepts in AI, the most fundamental question about it—What is it?—has rarely been answered directly. Numerous papers have lobbied for one or another variety of representation, other papers have argued for various properties a representation should have, and still others have focused on properties that are important to the notion of representation in general. In this article, we go back to basics to address the question directly. We believe that the answer can best be understood in terms of five important and distinctly different roles that a representation plays, each of which places different and, at times, conflicting demands on the properties a representation should have. We argue that keeping in mind all five of these roles provides a usefully broad perspective that sheds light on some long-standing disputes and can invigorate both research and practice in the field.",
"title": ""
},
{
"docid": "34641057a037740ec28581a798c96f05",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "0929b7711b7209c39495401dffb62307",
"text": "Ethereum is a major blockchain-based platform for smart contracts – Turing complete programs that are executed in a decentralized network and usually manipulate digital units of value. A peer-to-peer network of mutually distrusting nodes maintains a common view of the global state and executes code upon request. The stated is stored in a blockchain secured by a proof-of-work consensus mechanism similar to that in Bitcoin. The core value proposition of Ethereum is a full-featured programming language suitable for implementing complex business logic. Decentralized applications without a trusted third party are appealing in areas like crowdfunding, financial services, identity management, and gambling. Smart contracts are a challenging research topic that spans over areas ranging from cryptography, consensus algorithms, and programming languages to governance, finance, and law. This paper summarizes the state of knowledge in this field. We provide a technical overview of Ethereum, outline open challenges, and review proposed solutions. We also mention alternative smart contract blockchains.",
"title": ""
},
{
"docid": "c8a9aff29f3e420a1e0442ae7caa46eb",
"text": "Four new species of Ixora (Rubiaceae, Ixoreae) from Brazil are described and illustrated and their relationships to morphologically similar species as well as their conservation status are discussed. The new species, Ixora cabraliensis, Ixora emygdioi, Ixora grazielae, and Ixora pilosostyla are endemic to the Atlantic Forest of southern Bahia and Espirito Santo. São descritas e ilustradas quatro novas espécies de Ixora (Rubiaceae, Ixoreae) para o Brasil bem como discutidos o relacionamento morfológico com espécies mais similares e o estado de conservação. As novas espécies Ixora cabraliensis, Ixora emygdioi, Ixora grazielae e Ixora pilosostyla são endêmicas da Floresta Atlântica, no trecho do sul do estado da Bahia e o estado do Espírito Santo.",
"title": ""
},
{
"docid": "a05fdb35110cd940cd6f4f64e950d1d0",
"text": "The vector control system of permanent magnet synchronous motor based on sliding mode observer (SMO) is studied in this paper. On the basis of analyzing the traditional sliding mode observer, an improved sliding mode observer is proposed. Firstly, by using the hyperbolic tangent function instead of the traditional symbol function, the chattering of the system is suppressed. Secondly, a low pass filter which has the variable cutoff frequency along with the rotor speed is designed to reduce the phase delay. Then, by using Kalman filter, it could make back EMF information more smoothly. Finally, in order to obtain accurate position and velocity information, the method of phase-locked loop (PLL) is proposed to estimate the position and speed of the rotor. The simulation results show that the new algorithm can not only improve the accuracy of the position and speed estimation of the rotor but reduce the chattering of the system.",
"title": ""
},
{
"docid": "8a36bdb2cc232ab541715a823625b586",
"text": "Artificial insemination (AI) is an important technique in all domestic species to ensure rapid genetic progress. The use of AI has been reported in camelids although insemination trials are rare. This could be because of the difficulties involved in collecting as well as handling the semen due to the gelatinous nature of the seminal plasma. In addition, as all camelids are induced ovulators, the females need to be induced to ovulate before being inseminated. This paper discusses the different methods for collection of camel semen and describes how the semen concentration and morphology are analyzed. It also examines the use of different buffers for liquid storage of fresh and chilled semen, the ideal number of live sperm to inseminate and whether pregnancy rates are improved if the animal is inseminated at the tip of the uterine horn verses in the uterine body. Various methods to induce ovulation in the female camels are also described as well as the timing of insemination in relation to ovulation. Results show that collection of semen is best achieved using an artificial vagina, and the highest pregnancy rates are obtained if a minimum of 150 × 106 live spermatozoa (diluted in Green Buffer, lactose (11%), or I.N.R.A. 96) are inseminated into the body of the uterus 24 h after the GnRH injection, given to the female camel to induce ovulation. Deep freezing of camel semen is proving to be a great challenge but the use of various freezing protocols, different diluents and different packaging methods (straws verses pellets) will be discussed. Preliminary results indicate that Green and Clear Buffer for Camel Semen is the best diluent to use for freezing dromedary semen and that freezing in pellets rather than straws result in higher post-thaw motility. Preservation of semen by deepfreezing is very important in camelids as it prevents the need to transport animals between farms and it extends the reproductive life span of the male, therefore further work needs to be carried out to improve the fertility of frozen/thawed camel spermatozoa.",
"title": ""
},
{
"docid": "2a3b9c70dc8f80419ba4557752c4e603",
"text": "The proliferation of sensors and mobile devices and their connectedness to the network have given rise to numerous types of situation monitoring applications. Data Stream Management Systems (DSMSs) have been proposed to address the data processing needs of such applications that require collection of high-speed data, computing results on-the-fly, and taking actions in real-time. Although a lot of work appears in the area of DSMS, not much has been done in multilevel secure (MLS) DSMS making the technology unsuitable for highly sensitive applications, such as battlefield monitoring. An MLS–DSMS should ensure the absence of illegal information flow in a DSMS and more importantly provide the performance needed to handle continuous queries. We illustrate why the traditional DSMSs cannot be used for processing multilevel secure continuous queries and discuss various DSMS architectures for processing such queries. We implement one such architecture and demonstrate how it processes continuous queries. In order to provide better quality of service and memory usage in a DSMS, we show how continuous queries submitted by various users can be shared. We provide experimental evaluations to demonstrate the performance benefits achieved through query sharing.",
"title": ""
}
] |
scidocsrr
|
020b996cecdede0b34e881dd8875261c
|
Global-local and trail-making tasks by monolingual and bilingual children: beyond inhibition.
|
[
{
"docid": "89b30e45feda20ad34ec7bef3a877e5d",
"text": "Advanced inhibitory control skills have been found in bilingual speakers as compared to monolingual controls (Bialystok, 1999). We examined whether this effect is generalized to an unstudied language group (Spanish-English bilingual) and multiple measures of executive function by administering a battery of tasks to 50 kindergarten children drawn from three language groups: native bilinguals, monolinguals (English), and English speakers enrolled in second-language immersion kindergarten. Despite having significantly lower verbal scores and parent education/income level, Spanish-English bilingual children's raw scores did not differ from their peers. After statistically controlling for these factors and age, native bilingual children performed significantly better on the executive function battery than both other groups. Importantly, the relative advantage was significant for tasks that appear to call for managing conflicting attentional demands (Conflict tasks); there was no advantage on impulse-control (Delay tasks). These results advance our understanding of both the generalizability and specificity of the compensatory effects of bilingual experience for children's cognitive development.",
"title": ""
}
] |
[
{
"docid": "6fab2f7c340b6edbffe30b061bcd991e",
"text": "A Majority-Inverter Graph (MIG) is a recently introduced logic representation form whose algebraic and Boolean properties allow for efficient logic optimization. In particular, when considering logic depth reduction, MIG algorithms obtained significantly superior synthesis results as compared to the state-of-the-art approaches based on AND-inverter graphs and commercial tools. In this paper, we present a new MIG optimization algorithm targeting size minimization based on functional hashing. The proposed algorithm makes use of minimum MIG representations which are precomputed for functions up to 4 variables using an approach based on Satisfiability Modulo Theories (SMT). Experimental results show that heavily-optimized MIGs can be further minimized also in size, thanks to our proposed methodology. When using the optimized MIGs as starting point for technology mapping, we were able to improve both depth and area for the arithmetic instances of the EPFL benchmarks beyond the current results achievable by state-of-the-art logic synthesis algorithms.",
"title": ""
},
{
"docid": "0e2311c6dd24a2efe51de10c3d5e8a01",
"text": "Continents, especially their Archean cores, are underlain by thick thermal boundary layers that have been largely isolated from the convecting mantle over billion-year timescales, far exceeding the life span of oceanic thermal boundary layers. This longevity is promoted by the fact that continents are underlain by highly melt-depleted peridotites, which result in a chemically distinct boundary layer that is intrinsically buoyant and strong (owing to dehydration). This chemical boundary layer counteracts the destabilizing effect of the cold thermal state of continents. The compositions of cratonic peridotites require formation at shallower depths than they currently reside, suggesting that the building blocks of continents formed in oceanic or arc environments and became “continental” after significant thickening or underthrusting. Continents are difficult to destroy, but refertilization and rehydration of continental mantle by the passage of melts can nullify the unique stabilizing composition of continents.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "ae3770d75796453f83329b676fa884ba",
"text": "This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S3FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchorbased detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-theart detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"title": ""
},
{
"docid": "f36b101aa059792e21281bff8157568f",
"text": "Many research projects oriented on control mechanisms of virtual agents in videogames have emerged in recent years. However, this boost has not been accompanied with the emergence of toolkits supporting development of these projects, slowing down the progress in the field. Here, we present Pogamut 3, an open source platform for rapid development of behaviour for virtual agents embodied in a 3D environment of the Unreal Tournament 2004 videogame. Pogamut 3 is designed to support research as well as educational projects. The paper also briefly touches extensions of Pogamut 3; the ACT-R integration, the emotional model ALMA integration, support for control of avatars at the level of gestures, and a toolkit for developing educational scenarios concerning orientation in urban areas. These extensions make Pogamut 3 applicable beyond the domain of computer games.",
"title": ""
},
{
"docid": "9c35b7e3bf0ef3f3117c6ba8a9ad1566",
"text": "Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data, asynchronous parallelization of SGD has also been studied. Then, a natural question is whether these techniques can be seamlessly integrated with each other, and whether the integration has desirable theoretical guarantee on its convergence. In this paper, we provide our formal answer to this question. In particular, we consider the asynchronous parallelization of SGD, accelerated by leveraging variance reduction, coordinate sampling, and Nesterov’s method. We call the new algorithm asynchronous accelerated SGD (AASGD). Theoretically, we proved a convergence rate of AASGD, which indicates that (i) the three acceleration methods are complementary to each other and can make their own contributions to the improvement of convergence rate; (ii) asynchronous parallelization does not hurt the convergence rate, and can achieve considerable speedup under appropriate parameter setting. Empirically, we tested AASGD on a few benchmark datasets. The experimental results verified our theoretical findings and indicated that AASGD could be a highly effective and efficient algorithm for practical use.",
"title": ""
},
{
"docid": "3c41bdaeaaa40481c8e68ad00426214d",
"text": "Image captioning is an important task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite mitigating the vanishing gradient problem, and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential across time. To address this issue, recent work has shown benefits of convolutional networks for machine translation and conditional image generation [9, 34, 35]. Inspired by their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on the challenging MSCOCO dataset and demonstrate performance on par with the LSTM baseline [16], while having a faster training time per number of parameters. We also perform a detailed analysis, providing compelling reasons in favor of convolutional language generation approaches.",
"title": ""
},
{
"docid": "48b2d263a0f547c5c284c25a9e43828e",
"text": "This paper presents hierarchical topic models for integrating sentiment analysis with collaborative filtering. Our goal is to automatically predict future reviews to a given author from previous reviews. For this goal, we focus on differentiating author's preference, while previous sentiment analysis models process these review articles without this difference. Therefore, we propose a Latent Evaluation Topic model (LET) that infer each author's preference by introducing novel latent variables into author and his/her document layer. Because these variables distinguish the variety of words in each article by merging similar word distributions, LET incorporates the difference of writers' preferences into sentiment analysis. Consequently LET can determine the attitude of writers, and predict their reviews based on like-minded writers' reviews in the collaborative filtering approach. Experiments on review articles show that the proposed model can reduce the dimensionality of reviews to the low-dimensional set of these latent variables, and is a significant improvement over standard sentiment analysis models and collaborative filtering algorithms.",
"title": ""
},
{
"docid": "23677c0107696de3cc630f424484284a",
"text": "With the development of expressway, the vehicle path recognition based on RFID is designed and an Electronic Toll Collection system of expressway will be implemented. It uses a passive RFID tag as carrier to identify Actual vehicle path in loop road. The ETC system will toll collection without parking, also census traffic flow and audit road maintenance fees. It is necessary to improve expressway management.",
"title": ""
},
{
"docid": "db28e168e32c6a3907e092b9144d9033",
"text": "Trillions of microbes inhabit the human intestine, forming a complex ecological community that influences normal physiology and susceptibility to disease through its collective metabolic activities and host interactions. Understanding the factors that underlie changes in the composition and function of the gut microbiota will aid in the design of therapies that target it. This goal is formidable. The gut microbiota is immensely diverse, varies between individuals and can fluctuate over time — especially during disease and early development. Viewing the microbiota from an ecological perspective could provide insight into how to promote health by targeting this microbial community in clinical treatments.",
"title": ""
},
{
"docid": "9bc90b182e3acd0fd0cfa10a7abc32f8",
"text": "The advertising industry is seeking to use the unique data provided by the increasing usage of mobile devices and mobile applications (apps) to improve targeting and the experience with apps. As a consequence, understanding user behaviours with apps has gained increased interests from both academia and industry. In this paper we study user app engagement patterns and disruptions of those patterns in a data set unique in its scale and coverage of user activity. First, we provide a detailed account of temporal user activity patterns with apps and compare these to previous studies on app usage behavior. Then, in the second part, and the main contribution of this work, we take advantage of the scale and coverage of our sample and show how app usage behavior is disrupted through major political, social, and sports events.",
"title": ""
},
{
"docid": "f3002c5d152c8bf3d00473cbebdb6052",
"text": "I unstructured natural language — allow any statements, but make mistakes or failure. I controlled natural language — only allow unambiguous statements that can be interpreted (e.g., in supermarkets or for doctors). There is a vast amount of information in natural language. Understanding language to extract information or answering questions is more difficult than getting extracting gestalt properties such as topic, or choosing a help page. Many of the problems of AI are explicit in natural language understanding. “AI complete”.",
"title": ""
},
{
"docid": "5b34cc85e267f28c3eda238620f4646a",
"text": "An electrostatic chuck is one of the useful device holding a thin object flat on a bed by electrostatic force. The authors have investigated the fundamental characteristics of an electrostatic chuck consisted of a pair of comb type electrodes and a thin insulation layer between the electrodes and an object. When a thin polymer film is used as an insulation, the holding force for a wafer was large enough in practical use, while the large residual force remains after removing the DC applied voltage. Thus, it was concluded that AC applied voltage will be more preferable than DC, though the electrostatic force for DC applied voltage is somewhat greater than that for AC voltage. Since the electrostatic chuck is generally used in high temperature atmosphere, for example plasma etching, in the semiconductor industry, the insulating layer must be heat resistant. By using a thin ceramic plate, which was made specially for this purpose, the fundamental characteristics of the electrostatic chuck has been investigated. The greater holding force was obtained with a ceramic plate than that with a polymer film. Furthermore, almost no residual force was observed even for the DC applied voltage. The experimental results are reported both in air and in vacuum condition.",
"title": ""
},
{
"docid": "173f7f8a8fb38bbfd346513bfff1814e",
"text": "In this paper, we investigate a challenging task of automatic related work generation. Given multiple reference papers as input, the task aims to generate a related work section for a target paper. The generated related work section can be used as a draft for the author to complete his or her final related work section. We propose our Automatic Related Work Generation system called ARWG to address this task. It first exploits a PLSA model to split the sentence set of the given papers into different topic-biased parts, and then applies regression models to learn the importance of the sentences. At last it employs an optimization framework to generate the related work section. Our evaluation results on a test set of 150 target papers along with their reference papers show that our proposed ARWG system can generate related work sections with better quality. A user study is also performed to show ARWG can achieve an improvement over generic multi-document summarization baselines.",
"title": ""
},
{
"docid": "8f97a55eba1c9b3238b4c02d59dc8f52",
"text": "Despite the strong interests among practitioners, there is a knowledge gap with regard to online communities of practice. This study examines knowledge sharing among critical-care and advanced-practice nurses, who are engaged in a longstanding online community of practice. Data were collected about members’ online knowledge contribution as well as motivations for sharing or not sharing knowledge with others. In sum, 27 interviews with members and content analysis of approximately 400 messages were conducted. Data analysis showed that the most common types of knowledge shared were “Institutional Practice” and “Personal Opinion”. Five factors were found that helped motivate knowledge sharing: (a) self-selection type of membership, (b) desire to improve the nursing profession, (c) reciprocity, (d) a non-competitive environment, and (e) the role of the listserv moderator. Regarding barriers for knowledge sharing, four were found: (a) no new or additional knowledge to add, (b) unfamiliarity with subject, (c) lack of time, and (d) technology. These results will be informative to researchers and practitioners of online communities of practice.",
"title": ""
},
{
"docid": "f43aef1428a2c481fc97a25c17f4bdb4",
"text": "It is thought by cognitive scientists and typographers alike, that lower-case text is more legible than upper-case. Yet lower-case letters are, on average, smaller in height and width than upper-case characters, which suggests an upper-case advantage. Using a single unaltered font and all upper-, all lower-, and mixed-case text, we assessed size thresholds for words and random strings, and reading speeds for text with normal and visually impaired participants. Lower-case thresholds were roughly 0.1 log unit higher than upper. Reading speeds were higher for upper- than for mixed-case text at sizes twice acuity size; at larger sizes, the upper-case advantage disappeared. Results suggest that upper-case is more legible than the other case styles, especially for visually-impaired readers, because smaller letter sizes can be used than with the other case styles, with no diminution of legibility.",
"title": ""
},
{
"docid": "5eecfb516bfc30379a8b457c18770f26",
"text": "Machine learning as a service has been widely deployed to utilize deep neural network models to provide prediction services. However, this raises privacy concerns since clients need to send sensitive information to servers. In this paper, we focus on the scenario where clients want to classify private images with a convolutional neural network model hosted in the server, while both parties keep their data private. We present FALCON, a fast and secure approach for CNN predictions based on Fourier Transform. Our solution enables linear layers of a CNN model to be evaluated simply and efficiently with fully homomorphic encryption. We also introduce the first efficient and privacy-preserving protocol for softmax function, which is an indispensable component in CNNs and has not yet been evaluated in previous works due to its high complexity. We implemented the FALCON and evaluated the performance on real-world CNN models. The experimental results show that FALCON outperforms the best known works in both computation and communication cost.",
"title": ""
},
{
"docid": "619c896586e7907ba22529100418256e",
"text": "Rational analysis (Anderson 1990, 1991a) is an empiricalprogram of attempting to explain why the cognitive system isadaptive, with respect to its goals and the structure of itsenvironment. We argue that rational analysis has two importantimplications for philosophical debate concerning rationality. First,rational analysis provides a model for the relationship betweenformal principles of rationality (such as probability or decisiontheory) and everyday rationality, in the sense of successfulthought and action in daily life. Second, applying the program ofrational analysis to research on human reasoning leads to a radicalreinterpretation of empirical results which are typically viewed asdemonstrating human irrationality.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
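As a rough illustration of the lexicon-based idea (not the Opinion Observer system itself), the following sketch scores a product feature in a sentence by aggregating nearby opinion words, with simple negation handling. The tiny lexicons, the distance-weighted aggregation rule, and the example sentence are all assumptions made for the illustration.

# Minimal lexicon-based orientation sketch: aggregate opinion words around a feature mention.
POSITIVE = {"great", "amazing", "good", "excellent", "clear"}
NEGATIVE = {"bad", "poor", "terrible", "blurry", "weak"}
NEGATIONS = {"not", "no", "never", "hardly"}

def feature_orientation(sentence, feature):
    tokens = sentence.lower().replace(",", " ").split()
    if feature not in tokens:
        return 0.0
    pos = tokens.index(feature)
    score = 0.0
    for i, tok in enumerate(tokens):
        polarity = 1.0 if tok in POSITIVE else -1.0 if tok in NEGATIVE else 0.0
        if polarity == 0.0:
            continue
        # Flip polarity if a negation word appears just before the opinion word.
        if i > 0 and tokens[i - 1] in NEGATIONS:
            polarity = -polarity
        # Closer opinion words count more (distance-weighted aggregation).
        score += polarity / (abs(i - pos) + 1)
    return score  # >0 positive, <0 negative, 0 neutral

print(feature_orientation("the screen is not clear and looks blurry", "screen"))  # negative score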
{
"docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
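A minimal sketch of the maximum mean discrepancy term mentioned in the abstract, computed with a Gaussian kernel between a batch of source features and a batch of target features. In the described setting this term would be added to the supervised training loss of the deep network; the random feature batches and the fixed kernel bandwidth below are assumptions (in practice a median heuristic or multiple bandwidths are common).

import numpy as np

def gaussian_mmd2(xs, xt, bandwidth=4.0):
    """Biased estimate of squared MMD between two feature batches under an RBF kernel."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
        return np.exp(-d2 / (2.0 * bandwidth**2))
    return kernel(xs, xs).mean() + kernel(xt, xt).mean() - 2.0 * kernel(xs, xt).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(128, 16))
target_shifted = rng.normal(0.5, 1.0, size=(128, 16))   # target features from a shifted distribution
target_matched = rng.normal(0.0, 1.0, size=(128, 16))
print("MMD^2, shifted target:", gaussian_mmd2(source, target_shifted))
print("MMD^2, matched target:", gaussian_mmd2(source, target_matched))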
] |
scidocsrr
|
44d71ea8ccb4a40197165b137000aa05
|
Document-based Recommender System for Job Postings using Dense Representations
|
[
{
"docid": "63306a873be38cbd14cd3f4d9a21fd28",
"text": "We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generated as such captures the semantic meanings of the document during learning. A corruption model is included, which introduces a data-dependent regularization that favors informative or rare words while forcing the embeddings of common and non-discriminative ones to be close to zero. Doc2VecC produces significantly better word embeddings than Word2Vec. We compare Doc2VecC with several state-of-the-art document representation learning algorithms. The simple model architecture introduced by Doc2VecC matches or out-performs the state-of-the-art in generating high-quality document representations for sentiment analysis, document classification as well as semantic relatedness tasks. The simplicity of the model enables training on billions of words per hour on a single machine. At the same time, the model is very efficient in generating representations of unseen documents at test time.",
"title": ""
},
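To make the representation concrete, the sketch below forms a document vector as the average of word embeddings and mimics the corruption idea by randomly dropping words and rescaling so the corrupted average stays unbiased. The toy vocabulary, random embeddings, and dropout rate are placeholders; in Doc2VecC the embeddings are learned jointly and the corruption acts during training rather than at inference.

import numpy as np

rng = np.random.default_rng(42)
vocab = {w: i for i, w in enumerate(["deep", "learning", "for", "job", "matching", "systems"])}
emb = rng.normal(size=(len(vocab), 8))            # placeholder 8-dimensional word embeddings

def doc_vector(tokens, corruption=0.3, train=True):
    """Average of word embeddings; in training mode, drop words at random and rescale."""
    ids = np.array([vocab[t] for t in tokens if t in vocab])
    if ids.size == 0:
        return np.zeros(emb.shape[1])
    if not train:
        return emb[ids].mean(axis=0)
    keep = rng.random(ids.size) >= corruption      # drop each word with probability `corruption`
    if not keep.any():
        return np.zeros(emb.shape[1])
    # Sum of surviving embeddings, rescaled so the corrupted average is unbiased.
    return emb[ids[keep]].sum(axis=0) / ((1.0 - corruption) * ids.size)

doc = "deep learning for job matching".split()
print(doc_vector(doc, train=True))
print(doc_vector(doc, train=False))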
{
"docid": "544426cfa613a31ac903041afa946d89",
"text": "Recommender systems have the effect of guiding users in a personalized way to interesting objects in a large space of possible options. Content-based recommendation systems try to recommend items similar to those a given user has liked in the past. Indeed, the basic process performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items. This chapter provides an overview of content-based recommender systems, with the aim of imposing a degree of order on the diversity of the different aspects involved in their design and implementation. The first part of the chapter presents the basic concepts and terminology of contentbased recommender systems, a high level architecture, and their main advantages and drawbacks. The second part of the chapter provides a review of the state of the art of systems adopted in several application domains, by thoroughly describing both classical and advanced techniques for representing items and user profiles. The most widely adopted techniques for learning user profiles are also presented. The last part of the chapter discusses trends and future research which might lead towards the next generation of systems, by describing the role of User Generated Content as a way for taking into account evolving vocabularies, and the challenge of feeding users with serendipitous recommendations, that is to say surprisingly interesting items that they might not have otherwise discovered. Pasquale Lops Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: lops@di.uniba.it Marco de Gemmis Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: degemmis@di.uniba.it Giovanni Semeraro Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: semeraro@di.uniba.it",
"title": ""
}
] |
[
{
"docid": "a63dfbdf30721a08bc7222a3883d7dc0",
"text": "Bier spots represent a benign vascular mottling characterized by multiple irregular white macules along the extensor surfaces of the arms and legs. They have been reported in a variety of diverse conditions with no consistent disease association. We have identified a novel association between these physiologic anemic macules and lower extremity lymphedema. Eleven patients between 23 and 70 years of age (5 male and 6 female) were diagnosed with Bier spots as evidenced by reversible white macules ranging from 3 to 8 mm in diameter on the extensor portions of the feet, ankles, and calves. The thighs were affected as well in 2 morbidly obese subjects. We suspect that these lesions are not uncommon in lymphedema but are simply under-recognized.",
"title": ""
},
{
"docid": "c55c339eb53de3a385df7d831cb4f24b",
"text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.",
"title": ""
},
{
"docid": "8a322a2d1ea98a7232c37797d2db2bfa",
"text": "The link between affect and student learning has been the subject of increasing attention in recent years. Affective states such as flow and curiosity tend to have positive correlations with learning while negative states such as boredom and frustration have the opposite effect. Student engagement and motivation have also been shown to be critical in improving learning gains with computer-based learning environments. Consequently, it is a design goal of many computer-based learning environments to encourage positive affect and engagement while students are learning. Game-based learning environments offer significant potential for increasing student engagement and motivation. However, it is unclear how affect and engagement interact with learning in game-based learning environments. This work presents an in-depth analysis of how these phenomena occur in the game-based learning environment, Crystal Island. The findings demonstrate that game-based learning environments can simultaneously support learning and promote positive affect and engagement.",
"title": ""
},
{
"docid": "cd3fbe507e685b3f62ebd5e5243ddb0b",
"text": "Changes in the background EEG activity occurring at the same time as visual and auditory evoked potentials, as well as during the interstimulus interval in a CNV paradigm were analysed in human subjects, using serial power measurements of overlapping EEG segments. The analysis was focused on the power of the rhythmic activity within the alpha band (RAAB power). A decrease in RAAB power occurring during these event-related phenomena was indicative of desynchronization. Phasic, i.e. short lasting, localised desynchronization was present during sensory stimulation, and also preceding the imperative signal and motor response (motor preactivation) in the CNV paradigm.",
"title": ""
},
{
"docid": "21504c2f05f0bae4cd93f75d564311dc",
"text": "A molecular test for Alzheimer's disease could lead to better treatment and therapies. We found 18 signaling proteins in blood plasma that can be used to classify blinded samples from Alzheimer's and control subjects with close to 90% accuracy and to identify patients who had mild cognitive impairment that progressed to Alzheimer's disease 2–6 years later. Biological analysis of the 18 proteins points to systemic dysregulation of hematopoiesis, immune responses, apoptosis and neuronal support in presymptomatic Alzheimer's disease.",
"title": ""
},
{
"docid": "9fe198a6184a549ff63364e9782593d8",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
},
{
"docid": "5f8b51a4e762928ab46a3ceca6f488e7",
"text": "Variable-flux permanent-magnet machines (VFPM) are of great interest and many different machine topologies have been documented. This paper categorizes VFPM machine topologies with regard to the method of flux variation and further, in the case of hybrid excited machines with field coils, with regard to the location of the excitation sources. The different VFPM machines are reviewed and compared in terms of their torque density, complexity and their ability to vary the flux.",
"title": ""
},
{
"docid": "d37316c9a63d506b7da4797de0e645e8",
"text": "Isomap is one of widely-used low-dimensional embedding methods, where geodesic distances on a weighted graph are incorporated with the classical scaling (metric multidimensional scaling). In this paper we pay our attention to two critical issues that were not considered in Isomap, such as: (1) generalization property (projection property); (2) topological stability. Then we present a robust kernel Isomap method, armed with such two properties. We present a method which relates the Isomap to Mercer kernel machines, so that the generalization property naturally emerges, through kernel principal component analysis. For topological stability, we investigate the network flow in a graph, providing a method for eliminating critical outliers. The useful behavior of the robust kernel Isomap is confirmed through numerical experiments with several data sets.",
"title": ""
},
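For reference, the following is a compact sketch of the basic Isomap pipeline this paper builds on (k-nearest-neighbour graph, graph shortest paths, classical MDS). The kernel interpretation and the network-flow-based outlier removal that the abstract contributes are not reproduced here, and the value of k and the toy data are arbitrary choices.

import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap(X, k=8, n_components=2):
    """Basic Isomap: kNN graph -> geodesic distances -> classical MDS (illustrative sketch)."""
    n = X.shape[0]
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    dist = np.sqrt(np.maximum(d2, 0.0))
    # Keep only each point's k nearest neighbours; np.inf marks "no edge".
    graph = np.full((n, n), np.inf)
    nn = np.argsort(dist, axis=1)[:, 1:k + 1]
    for i in range(n):
        graph[i, nn[i]] = dist[i, nn[i]]
    graph = np.minimum(graph, graph.T)
    geo = shortest_path(graph, method="D", directed=False)
    # Classical MDS on squared geodesic distances.
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (geo**2) @ H
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy example: a noisy circle embedded in 3-D.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t), 0.05 * rng.normal(size=300)]
print(isomap(X).shape)   # (300, 2)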
{
"docid": "1c8f6d6c599d19f61c7e384d06ee6b09",
"text": "In this paper an approach for obtaining unique solutions to forward and inverse kinematics of a spherical parallel manipulator (SPM) system with revolute joints is proposed. Kinematic analysis of a general SPM with revolute joints is revisited and the proposed approach is formulated in the form of easy-to-follow algorithms that are described in detail. A graphical verification method using SPM computer-aided-design (CAD) models is presented together with numerical and experimental examples that confirm the correctness of the proposed approach. It is expected that this approach can be applied to SPMs with different geometries and can be useful in designing real-time control systems of SPMs.",
"title": ""
},
{
"docid": "a9baecb9470242c305942f7bc98494ab",
"text": "This paper summaries the state-of-the-art of image quality assessment (IQA) and human visual system (HVS). IQA provides an objective index or real value to measure the quality of the specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out in the end of the paper.",
"title": ""
},
{
"docid": "d7bd02def0f010016b53e2c41b42df35",
"text": "We utilise smart eyeglasses for dietary monitoring, in particular to sense food chewing. Our approach is based on a 3D-printed regular eyeglasses design that could accommodate processing electronics and Electromyography (EMG) electrodes. Electrode positioning was analysed and an optimal electrode placement at the temples was identified. We further compared gel and dry fabric electrodes. For the subsequent analysis, fabric electrodes were attached to the eyeglasses frame. The eyeglasses were used in a data recording study with eight participants eating different foods. Two chewing cycle detection methods and two food classification algorithms were compared. Detection rates for individual chewing cycles reached a precision and recall of 80%. For five foods, classification accuracy for individual chewing cycles varied between 43% and 71%. Majority voting across intake sequences improved accuracy, ranging between 63% and 84%. We concluded that EMG-based chewing analysis using smart eyeglasses can contribute essential chewing structure information to dietary monitoring systems, while the eyeglasses remain inconspicuous and thus could be continuously used.",
"title": ""
},
{
"docid": "fca805a46323a054d6cbe75fcff9deb3",
"text": "This study investigates the effectiveness of digital nudging for users’ social sharing of online platform content. In collaboration with a leading career and education online platform, we conducted a large-scale randomized experiment of digital nudging using website popups. Grounding on the Social Capital Theory and the individual motivation mechanism, we proposed and tested four kinds of nudging messages: simple request, monetary incentive, relational capital, and cognitive capital. We find that nudging messages with monetary incentive, relational and cognitive capital framings lead to increase in social sharing behavior, while nudging message with simple request decreases social sharing, comparing to the control group without nudging. This study contributes to the prior research on digital nudging by providing causal evidence of effective nudging for online social sharing behavior. The findings of this study also provide valuable guidelines for the optimal design of online platforms to effectively nudge/encourage social sharing in practice.",
"title": ""
},
{
"docid": "defde14c64f5eecda83cf2a59c896bc0",
"text": "Time series shapelets are discriminative subsequences and their similarity to a time series can be used for time series classification. Since the discovery of time series shapelets is costly in terms of time, the applicability on long or multivariate time series is difficult. In this work we propose Ultra-Fast Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast Shapelets yield the same prediction quality as current state-of-theart shapelet-based time series classifiers that carefully select the shapelets by being by up to three orders of magnitudes. Since this method allows a ultra-fast shapelet discovery, using shapelets for long multivariate time series classification becomes feasible. A method for using shapelets for multivariate time series is proposed and Ultra-Fast Shapelets is proven to be successful in comparison to state-of-the-art multivariate time series classifiers on 15 multivariate time series datasets from various domains. Finally, time series derivatives that have proven to be useful for other time series classifiers are investigated for the shapelet-based classifiers. It is shown that they have a positive impact and that they are easy to integrate with a simple preprocessing step, without the need of adapting the shapelet discovery algorithm.",
"title": ""
},
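A small sketch of the random-shapelet idea: sample subsequences at random from the training series, represent each series by its minimum distance to every sampled shapelet, and feed those features to any off-the-shelf classifier. The synthetic data, the number and lengths of shapelets, and the use of scikit-learn's logistic regression are assumptions for illustration, not the paper's exact setup.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def random_shapelets(X, n_shapelets=50, min_len=5, max_len=15):
    """Sample random subsequences (shapelets) from the training series."""
    shapelets = []
    for _ in range(n_shapelets):
        series = X[rng.integers(len(X))]
        length = rng.integers(min_len, max_len + 1)
        start = rng.integers(0, len(series) - length + 1)
        shapelets.append(series[start:start + length])
    return shapelets

def shapelet_features(X, shapelets):
    """Each feature = minimum Euclidean distance between a series and one shapelet."""
    feats = np.empty((len(X), len(shapelets)))
    for i, series in enumerate(X):
        for j, s in enumerate(shapelets):
            windows = np.lib.stride_tricks.sliding_window_view(series, len(s))
            feats[i, j] = np.sqrt(((windows - s) ** 2).sum(axis=1)).min()
    return feats

# Toy data: class 1 contains a bump somewhere in the series, class 0 is noise only.
n, T = 200, 60
X = 0.3 * rng.normal(size=(n, T))
y = rng.integers(0, 2, n)
for i in np.where(y == 1)[0]:
    p = rng.integers(0, T - 10)
    X[i, p:p + 10] += np.hanning(10)

shapelets = random_shapelets(X[:150])
clf = LogisticRegression(max_iter=1000).fit(shapelet_features(X[:150], shapelets), y[:150])
print("test accuracy:", clf.score(shapelet_features(X[150:], shapelets), y[150:]))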
{
"docid": "8f41af1dec3cf3af11dff8237a497521",
"text": "This paper proposes a control strategy for a single-stage, three-phase, photovoltaic (PV) system that is connected to a distribution network. The control is based on an inner current-control loop and an outer DC-link voltage regulator. The current-control mechanism decouples the PV system dynamics from those of the network and the loads. The DC-link voltage-control scheme enables control and maximization of the real power output. Proper feedforward actions are proposed for the current-control loop to make its dynamics independent of those of the rest of the system. Further, a feedforward compensation mechanism is proposed for the DC-link voltage-control loop, to make the PV system dynamics immune to the PV array nonlinear characteristic. This, in turn, permits the design and optimization of the PV system controllers for a wide range of operating conditions. A modal/sensitivity analysis is also conducted on a linearized model of the overall system, to characterize dynamic properties of the system, to evaluate robustness of the controllers, and to identify the nature of interactions between the PV system and the network/loads. The results of the modal analysis confirm that under the proposed control strategy, dynamics of the PV system are decoupled from those of the distribution network and, therefore, the PV system does not destabilize the distribution network. It is also shown that the PV system dynamics are not influenced by those of the network (i.e., the PV system maintains its stability and dynamic properties despite major variations in the line length, line X/R ratio, load type, and load distance from the PV system).",
"title": ""
},
{
"docid": "ad33994b26dad74e6983c860c0986504",
"text": "Accurate software effort estimation has been a challenge for many software practitioners and project managers. Underestimation leads to disruption in the project's estimated cost and delivery. On the other hand, overestimation causes outbidding and financial losses in business. Many software estimation models exist; however, none have been proven to be the best in all situations. In this paper, a decision tree forest (DTF) model is compared to a traditional decision tree (DT) model, as well as a multiple linear regression model (MLR). The evaluation was conducted using ISBSG and Desharnais industrial datasets. Results show that the DTF model is competitive and can be used as an alternative in software effort prediction.",
"title": ""
},
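The kind of comparison described here can be prototyped quickly. The sketch below contrasts a single regression tree, a bagged ensemble of trees (a random forest is used as a stand-in for the paper's decision tree forest), and multiple linear regression on synthetic effort data, scoring each by cross-validated mean absolute error; the synthetic features are invented and the ISBSG and Desharnais datasets are not reproduced.

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical project features: size (function points), team experience, schedule pressure.
X = np.c_[rng.uniform(50, 2000, n), rng.uniform(1, 10, n), rng.uniform(0.5, 1.5, n)]
effort = 2.5 * X[:, 0] ** 0.9 / X[:, 1] * X[:, 2] + rng.normal(0, 50, n)   # toy effort model

models = {
    "decision tree": DecisionTreeRegressor(random_state=0),
    "tree ensemble": RandomForestRegressor(n_estimators=200, random_state=0),
    "linear regression": LinearRegression(),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, effort, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name:17s} MAE = {mae:.1f}")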
{
"docid": "d6146614330de1da7ae1a4842e2768c1",
"text": "Series-connected power switch provides a viable solution to implement high voltage and high frequency converters. By using the commercially available 1200V Silicon Carbide (SiC) Junction Field Effect Transistor (JFET) and Metal Oxide semiconductor Filed-effect Transistor (MOSFET), a 6 kV SiC hybrid power switch concept and its application are demonstrated. To solve the parameter deviation issue in the series device structure, an optimized voltage control method is introduced, which can guarantee the equal voltage sharing under both static and dynamic state. Without Zener diode arrays, this strategy can significantly reduce the turn-off switching loss. Moreover, this hybrid MOSFET-JFETs concept is also presented to suppress the silicon MOSFET parasitic capacitance effect. In addition, the positive gate drive voltage greatly accelerates turn-on speed and decreases the switching loss. Compared with the conventional super-JFETs, the proposed scheme is suitable for series-connected device, and can achieve better performance. The effectiveness of this method is validated by simulations and experiments, and promising results are obtained.",
"title": ""
},
{
"docid": "6020b70701164e0a14b435153db1743e",
"text": "Supply chain Management has assumed a significant role in firm's performance and has attracted serious research attention over the last few years. In this paper attempt has been made to review the literature on Supply Chain Management. A literature review reveals a considerable spurt in research in theory and practice of SCM. We have presented a literature review for 29 research papers for the period between 2005 and 2011. The aim of this study was to provide an up-to-date and brief review of the SCM literature that was focused on broad areas of the SCM concept.",
"title": ""
},
{
"docid": "6d813684a21e3ccc7fb2e09c866be1f1",
"text": "Cross-site scripting (XSS) is a code injection attack that allows an attacker to execute malicious script in another user’s browser. Once the attacker gains control over the Website vulnerable to XSS attack, it can perform actions like cookie-stealing, malware-spreading, session-hijacking and malicious redirection. Malicious JavaScripts are the most conventional ways of performing XSS attacks. Although several approaches have been proposed, XSS is still a live problem since it is very easy to implement, but di cult to detect. In this paper, we propose an e↵ective approach for XSS attack detection. Our method focuses on balancing the load between client and the server. Our method performs an initial checking in the client side for vulnerability using divergence measure. If the suspicion level exceeds beyond a threshold value, then the request is discarded. Otherwise, it is forwarded to the proxy for further processing. In our approach we introduce an attribute clustering method supported by rank aggregation technique to detect confounded JavaScripts. The approach is validated using real life data.",
"title": ""
},
{
"docid": "e2fa36b9ac1c788d684399d070854fcd",
"text": "Since its initial introduction in the 1970s, the field of environmental justice (EJ) continues to grow, with significant contributions from the disciplines of sustainability science, geography, political science, public policy and administration, urban planning, law, and many others. Each of these disciplines approach EJ research from slightly different perspectives, but all offer unique and valuable insight to the EJ knowledge domain. Although the interdisciplinary nature of environmental justice should be viewed as a strength, it presents a challenge when attempting to both summarize and synthesize key contributions to the field, due to disciplinary bias, narrow subfield foci, or gaps in knowledge by a research team without a representative disciplinary composition. The purpose of this paper is to provide a succinct, panoptic review of key research contributions to environmental justice, while simultaneously minimizing common problems associated with traditional reviews. In particular, this paper explores the utility of co-citation network analysis, to provide insight into the most important subdomains of environmental justice research. The results suggest that while early EJ research is initially focused on environmental disamenities and a continued focus on race and inequality, the research gradually shifts to foci more concerned with environmental amenities, such as parks and greenspace. We also find that race and inequality remain an important and consist line of research over the duration of the study time period. Implications for environmental justice research and its allied subfields are discussed.",
"title": ""
},
{
"docid": "29c6b23a9e73b263f8f2949a3b7821c2",
"text": "The antifungal, antibacterial, and antioxidant activity of four commercial essential oils (EOs) (thyme, clove, rosemary, and tea tree) from Romanian production were studied in order to assess them as bioactive compounds for active food packaging applications. The chemical composition of the oils was determined with the Folin-Ciocâlteu method and gas chromatography coupled with mass spectrometry and flame ionization detectors, and it was found that they respect the AFNOR/ISO standard limits. The EOs were tested against three food spoilage fungi-Fusarium graminearum, Penicillium corylophilum, and Aspergillus brasiliensis-and three potential pathogenic food bacteria-Staphylococcus aureus, Escherichia coli, and Listeria monocytogenes-using the disc diffusion method. It was found that the EOs of thyme, clove, and tea tree can be used as antimicrobial agents against the tested fungi and bacteria, thyme having the highest inhibitory effect. Concerning antioxidant activity determined by 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azino-bis 3-ethylbenzthiazoline-6-sulfonic acid (ABTS) methods, it has been established that the clove oil exhibits the highest activity because of its high phenolic content. Promising results were obtained by their incorporation into chitosan emulsions and films, which show potential for food packaging. Therefore, these essential oils could be suitable alternatives to chemical additives, satisfying the consumer demand for naturally preserved food products ensuring its safety.",
"title": ""
}
] |
scidocsrr
|
d2ef5bf0da06f1746d815789cb6ceed5
|
Effective Reinforcement Learning for Mobile Robots
|
[
{
"docid": "ab34db97483f2868697f7b0abab8daaa",
"text": "This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.",
"title": ""
},
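A minimal sketch of the locally weighted linear regression this survey focuses on: for each query point, nearby training points receive larger weights through a Gaussian kernel and a weighted least-squares fit is solved locally. The bandwidth, the small ridge term, and the one-dimensional toy data are illustrative choices.

import numpy as np

def lwlr_predict(x_query, X, y, bandwidth=0.3):
    """Locally weighted linear regression: one weighted least-squares fit per query point."""
    Xb = np.c_[np.ones(len(X)), X]                              # add intercept column
    q = np.r_[1.0, x_query]
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))    # Gaussian distance weighting
    W = np.diag(w)
    # Small ridge term keeps the local system well conditioned when weights are tiny.
    beta = np.linalg.solve(Xb.T @ W @ Xb + 1e-8 * np.eye(2), Xb.T @ W @ y)
    return q @ beta

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, 80))
y = np.sin(X) + 0.1 * rng.normal(size=80)
print([round(lwlr_predict(x, X, y), 3) for x in (1.0, 3.0, 5.0)])  # roughly sin(1), sin(3), sin(5)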
{
"docid": "3486d3493a0deef5c3c029d909e3cdfc",
"text": "To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks.",
"title": ""
},
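To make two of the compared ingredients concrete, here is a small tabular Q-learning loop with an experience-replay buffer on a toy chain MDP. The environment, hyperparameters, and replay schedule are illustrative assumptions and are far simpler than the connectionist agents evaluated in the paper.

import random

# Toy chain MDP: states 0..4, actions 0 = left, 1 = right; reward 1 for reaching state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
replay, alpha, gamma, eps = [], 0.1, 0.95, 0.1

def q_update(s, a, r, s2, done):
    target = r if done else r + gamma * max(Q[s2])
    Q[s][a] += alpha * (target - Q[s][a])

def act(s):
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    best = max(Q[s])
    return random.choice([a for a in range(N_ACTIONS) if Q[s][a] == best])  # random tie-break

for episode in range(200):
    s, done = 0, False
    while not done:
        a = act(s)
        s2, r, done = step(s, a)
        replay.append((s, a, r, s2, done))
        q_update(s, a, r, s2, done)
        # Experience replay: re-learn from a few stored transitions after every real step.
        for t in random.sample(replay, min(8, len(replay))):
            q_update(*t)
        s = s2

print([round(max(q), 2) for q in Q])  # values grow toward the goal; the terminal state stays 0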
{
"docid": "9dd92f8de1f0447461b5f4ec50f529f2",
"text": "This paper presents a method of vision-based reinforcement learning by which a robot learns to shoot a ball into a goal. We discuss several issues in applying the reinforcement learning method to a real robot with vision sensor by which the robot can obtain information about the changes in an environment. First, we construct a state space in terms of size, position, and orientation of a ball and a goal in an image, and an action space is designed in terms of the action commands to be sent to the left and right motors of a mobile robot. This causes a state-action deviation problem in constructing the state and action spaces that reflect the outputs from physical sensors and actuators, respectively. To deal with this issue, an action set is constructed in a way that one action consists of a series of the same action primitive which is successively executed until the current state changes. Next, to speed up the learning time, a mechanism of Learning from Easy Missions (or LEM) is implemented. LEM reduces the learning time from exponential to almost linear order in the size of the state space. The results of computer simulations and real robot experiments are given.",
"title": ""
},
{
"docid": "c0d7b92c1b88a2c234eac67c5677dc4d",
"text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization",
"title": ""
}
] |
[
{
"docid": "232d7e7986de374499c8ca580d055729",
"text": "In this paper we provide a survey of recent contributions to robust portfolio strategies from operations research and finance to the theory of portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures, mean-VaR and mean-CVaR developed recently. In addition, we review optimal estimation methods and Bayesian robust approaches.",
"title": ""
},
{
"docid": "e8f3dd4d2758da22d54114ec021b56dd",
"text": "Social networks allow rapid spread of ideas and innovations while the negative information can also propagate widely. When the cascades with different opinions reaching the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget k, the rumor blocking problem asks for k seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. The prior works have shown that the rumor blocking problem can be approximated within a factor of (1 − 1/e− δ) by a classic greedy algorithm combined with Monte Carlo simulation with the running time of O(k3 mn ln n/δ2), where n and m are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in O(km ln n/δ2) expected time and provides a (1 − 1/e − δ)-approximation with a high probability. The experimentally results on both the real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find the seed nodes which are effective in limiting the spread of rumor.",
"title": ""
},
{
"docid": "88d1062b03e96c8c50c6ee8923cb32da",
"text": "On the one hand this paper presents a theoretical method to predict the responses for the parallel coupled microstrip bandpass filters, and on the other hand proposes a new MATLAB simulation interface including all parameters design procedure to predict the filter responses. The main advantage of this developed interface calculator is to enable researchers and engineers to design and determine easily all parameters of the PCMBPF responses with high accuracy and very small CPU time. To validate the numerical method and the corresponding new interface calculator, two PCMBP filters for wireless communications are designed and compared with the commercial electromagnetic CST simulator and the fabricated prototype respectively. Measured results show good agreement with those obtained by numerical method and simulations.",
"title": ""
},
{
"docid": "c5958b1ef21663b89e3823e9c33dc316",
"text": "The so-called “phishing” attacks are one of the important threats to individuals and corporations in today’s Internet. Combatting phishing is thus a top-priority, and has been the focus of much work, both on the academic and on the industry sides. In this paper, we look at this problem from a new angle. We have monitored a total of 19,066 phishing attacks over a period of ten months and found that over 90% of these attacks were actually replicas or variations of other attacks in the database. This provides several opportunities and insights for the fight against phishing: first, quickly and efficiently detecting replicas is a very effective prevention tool. We detail one such tool in this paper. Second, the widely held belief that phishing attacks are dealt with promptly is but an illusion. We have recorded numerous attacks that stay active throughout our observation period. This shows that the current prevention techniques are ineffective and need to be overhauled. We provide some suggestions in this direction. Third, our observation give a new perspective into the modus operandi of attackers. In particular, some of our observations suggest that a small group of attackers could be behind a large part of the current attacks. Taking down that group could potentially have a large impact on the phishing attacks observed today.",
"title": ""
},
{
"docid": "4b43203c83b46f0637d048c7016cce17",
"text": "Efficient detection of three dimensional (3D) objects in point clouds is a challenging problem. Performing 3D descriptor matching or 3D scanning-window search with detector are both time-consuming due to the 3-dimensional complexity. One solution is to project 3D point cloud into 2D images and thus transform the 3D detection problem into 2D space, but projection at multiple viewpoints and rotations produce a large amount of 2D detection tasks, which limit the performance and complexity of the 2D detection algorithm choice. We propose to use convolutional neural network (CNN) for the 2D detection task, because it can handle all viewpoints and rotations for the same class of object together, as well as predicting multiple classes of objects with the same network, without the need for individual detector for each object class. We further improve the detection efficiency by concatenating two extra levels of early rejection networks with binary outputs before the multi-class detection network. Experiments show that our method has competitive overall performance with at least one-order of magnitude speedup comparing with latest 3D point cloud detection methods.",
"title": ""
},
{
"docid": "a65484f4ed51533ff1713815b68ff3e4",
"text": "A monolithic 5-6-GHz band receiver, consisting of a differential preamplifier, dual doubly balanced mixers, cascaded injection-locked frequency doublers, and a quadrature local oscillator generator and prescaler, realizes over 45 dB of image-rejection in a mature 25-GHz silicon bipolar technology. The measured single sideband (50 /spl Omega/) noise figure is 5.1 dB with an IIP3 of -4.5 dBm and 17-dB conversion gain at 5.3 GHz. The 1.9/spl times/1.2 mm/sup 2/ IC is packaged in a standard 32-pin ceramic quad flatpack and consumes less than 50 mW from a 2.2-V supply.",
"title": ""
},
{
"docid": "5b8948b87d316b1b80db51f879b2cf8c",
"text": "This reprint is provided for personal and noncommercial use. For any other use, please send a request Brian Hayes by electronic mail to bhayes@amsci.org.",
"title": ""
},
{
"docid": "731238fd0ebd69368cbbce181faf479e",
"text": "In recent years, substantial progress has been made in the field of reverberant speech signal processing, including both singleand multichannel dereverberation techniques and automatic speech recognition (ASR) techniques that are robust to reverberation. In this paper, we describe the REVERB challenge, which is an evaluation campaign that was designed to evaluate such speech enhancement (SE) and ASR techniques to reveal the state-of-the-art techniques and obtain new insights regarding potential future research directions. Even though most existing benchmark tasks and challenges for distant speech processing focus on the noise robustness issue and sometimes only on a single-channel scenario, a particular novelty of the REVERB challenge is that it is carefully designed to test robustness against reverberation, based on both real, single-channel, andmultichannel recordings. This challenge attracted 27 papers, which represent 25 systems specifically designed for SE purposes and 49 systems specifically designed for ASR purposes. This paper describes the problems dealt within the challenge, provides an overview of the submitted systems, and scrutinizes them to clarify what current processing strategies appear effective in reverberant speech processing.",
"title": ""
},
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
},
{
"docid": "ba3522be00805402629b4fb4a2c21cc4",
"text": "Successful electronic government requires the successful implementation of technology. This book lays out a framework for understanding a system of decision processes that have been shown to be associated with the successful use of technology. Peter Weill and Jeanne Ross are based at the Center for Information Systems Research at MIT’s Sloan School of Management, which has been doing research on the management of information technology since 1974. Understanding how to make decisions about information technology has been a primary focus of the Center for decades. Weill and Ross’ book is based on two primary studies and a number of related projects. The more recent study is a survey of 256 organizations from the Americas, Europe, and Asia Pacific that was led by Peter Weill between 2001 and 2003. This work also included 32 case studies. The second study is a set of 40 case studies developed by Jeanne Ross between 1999 and 2003 that focused on the relationship between information technology (IT) architecture and business strategy. This work identified governance issues associated with IT and organizational change efforts. Three other projects undertaken by Weill, Ross, and others between 1998 and 2001 also contributed to the material described in the book. Most of this work is available through the CISR Web site, http://mitsloan.mit.edu/cisr/rmain.php. Taken together, these studies represent a substantial body of work on which to base the development of a frameBOOK REVIEW",
"title": ""
},
{
"docid": "5b0aed99831f22c6f0520c0d27982caf",
"text": "Learning the reward function of an agent by observing its behavior is termed inverse reinforcement learning and has applications in learning from demonstration or apprenticeship learning. We introduce the problem of multi-agent inverse reinforcement learning, where reward functions of multiple agents are learned by observing their uncoordinated behavior. A centralized controller then learns to coordinate their behavior by optimizing a weighted sum of reward functions of all the agents. We evaluate our approach on a traffic-routing domain, in which a controller coordinates actions of multiple traffic signals to regulate traffic density. We show that the learner is not only able to match but even significantly outperform the expert.",
"title": ""
},
{
"docid": "50d22974ef09d0f02ee05d345e434055",
"text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.",
"title": ""
},
{
"docid": "298d67edd4095672c69f14598ba12ab6",
"text": "Cryptocurrencies have emerged as important financial software systems. They rely on a secure distributed ledger data structure; mining is an integral part of such systems. Mining adds records of past transactions to the distributed ledger known as Blockchain, allowing users to reach secure, robust consensus for each transaction. Mining also introduces wealth in the form of new units of currency. Cryptocurrencies lack a central authority to mediate transactions because they were designed as peer-to-peer systems. They rely on miners to validate transactions. Cryptocurrencies require strong, secure mining algorithms. In this paper we survey and compare and contrast current mining techniques as used by major Cryptocurrencies. We evaluate the strengths, weaknesses, and possible threats to each mining strategy. Overall, a perspective on how Cryptocurrencies mine, where they have comparable performance and assurance, and where they have unique threats and strengths are outlined.",
"title": ""
},
{
"docid": "d6c8de6d16d320cff86d85f924ca6253",
"text": "Many variants of language models have been proposed for information retrieval. Most existing models are based on multinomial distribution and would score documents based on query likelihood computed based on a query generation probabilistic model. In this paper, we propose and study a new family of query generation models based on Poisson distribution. We show that while in their simplest forms, the new family of models and the existing multinomial models are equivalent. However, based on different smoothing methods, the two families of models behave differently. We show that the Poisson model has several advantages, including naturally accommodating per-term smoothing and modeling accurate background more efficiently. We present several variants of the new model corresponding to different smoothing methods, and evaluate them on four representative TREC test collections. The results show that while their basic models perform comparably, the Poisson model can out perform multinomial model with per-term smoothing. The performance can be further improved with two-stage smoothing.",
"title": ""
},
{
"docid": "54c6e02234ce1c0f188dcd0d5ee4f04c",
"text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.",
"title": ""
},
{
"docid": "bcbc4ad2f0f6aec97a4711f305c90102",
"text": "OBJECTIVES\nTo determine the epidemiology of Candida bloodstream infections, variables influencing mortality, and antifungal resistance rates in ICUs in Spain.\n\n\nDESIGN\nProspective, observational, multicenter population-based study.\n\n\nSETTING\nMedical and surgical ICUs in 29 hospitals distributed throughout five metropolitan areas of Spain.\n\n\nPATIENTS\nAdult patients (≥ 18 yr) with an episode of Candida bloodstream infection during admission to any surveillance area ICU from May 2010 to April 2011.\n\n\nINTERVENTIONS\nCandida isolates were sent to a reference laboratory for species identification by DNA sequencing and susceptibility testing using the methods and breakpoint criteria promulgated by the European Committee on Antimicrobial Susceptibility Testing. Prognostic factors associated with early (0-7 d) and late (8-30 d) mortality were analyzed using logistic regression modeling.\n\n\nMEASUREMENTS AND MAIN RESULTS\nWe detected 773 cases of candidemia, 752 of which were included in the overall cohort. Among these, 168 (22.3%) occurred in adult ICU patients. The rank order of Candida isolates was as follows: Candida albicans (52%), Candida parapsilosis (23.7%), Candida glabrata (12.7%), Candida tropicalis (5.8%), Candida krusei (4%), and others (1.8%). Overall susceptibility to fluconazole was 79.2%. Cumulative mortality at 7 and 30 days after the first episode of candidemia was 16.5% and 47%, respectively. Multivariate analysis showed that early appropriate antifungal treatment and catheter removal (odds ratio, 0.27; 95% CI, 0.08-0.91), Acute Physiology and Chronic Health Evaluation II score (odds ratio, 1.11; 95% CI, 1.04-1.19), and abdominal source (odds ratio, 8.15; 95% CI, 1.75-37.93) were independently associated with early mortality. Determinants of late mortality were age (odds ratio, 1.04; 95% CI, 1.01-1.07), intubation (odds ratio, 7.24; 95% CI, 2.24-23.40), renal replacement therapy (odds ratio, 6.12; 95% CI, 2.24-16.73), and primary source (odds ratio, 2.51; 95% CI, 1.06-5.95).\n\n\nCONCLUSIONS\nCandidemia in ICU patients is caused by non-albicans species in 48% of cases, C. parapsilosis being the most common among these. Overall mortality remains high and mainly related with host factors. Prompt adequate antifungal treatment and catheter removal could be critical to decrease early mortality.",
"title": ""
},
{
"docid": "a457545baa59e39e6ef6d7e0d2f9c02e",
"text": "The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that one needed for domain adaptation learning to succeed. We analyze the assumptions in an agnostic PAC-style learning model for a the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption. I.e., the assumption that the conditioned label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning. Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions. We also discuss the intuitively appealing Appearing in Proceedings of the 13 International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP 9. Copyright 2010 by the authors. paradigm of re-weighting the labeled training sample according to the target unlabeled distribution and show that, somewhat counter intuitively, we show that paradigm cannot be trusted in the following sense. There are DA tasks that are indistinguishable as far as the training data goes but in which re-weighting leads to significant improvement in one task while causing dramatic deterioration of the learning success in the other.",
"title": ""
},
{
"docid": "e1fb80117a0925954b444360e227d680",
"text": "Maize is one of the most important food and feed crops in Asia, and is a source of income for several million farmers. Despite impressive progress made in the last few decades through conventional breeding in the “Asia-7” (China, India, Indonesia, Nepal, Philippines, Thailand, and Vietnam), average maize yields remain low and the demand is expected to increasingly exceed the production in the coming years. Molecular marker-assisted breeding is accelerating yield gains in USA and elsewhere, and offers tremendous potential for enhancing the productivity and value of Asian maize germplasm. We discuss the importance of such efforts in meeting the growing demand for maize in Asia, and provide examples of the recent use of molecular markers with respect to (i) DNA fingerprinting and genetic diversity analysis of maize germplasm (inbreds and landraces/OPVs), (ii) QTL analysis of important biotic and abiotic stresses, and (iii) marker-assisted selection (MAS) for maize improvement. We also highlight the constraints faced by research institutions wishing to adopt the available and emerging molecular technologies, and conclude that innovative models for resource-pooling and intellectual-property-respecting partnerships will be required for enhancing the level and scope of molecular marker-assisted breeding for maize improvement in Asia. Scientists must ensure that the tools of molecular marker-assisted breeding are focused on developing commercially viable cultivars, improved to ameliorate the most important constraints to maize production in Asia.",
"title": ""
},
{
"docid": "4e938aed527769ad65d85bba48151d21",
"text": "We provide a thorough description of all the artifacts that are generated by the messenger application Telegram on Android OS. We also provide interpretation of messages that are generated and how they relate to one another. Based on the results of digital forensics investigation and analysis in this paper, an analyst/investigator will be able to read, reconstruct and provide chronological explanations of messages which are generated by the user. Using three different smartphone device vendors and Android OS versions as the objects of our experiments, we conducted tests in a forensically sound manner.",
"title": ""
},
{
"docid": "ebca43d1e96ead6d708327d807b9e72f",
"text": "Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images. We introduce a new approach for solving weakly supervised semantic segmentation with deep Convolutional Neural Networks (CNNs). The method introduces a novel layer which applies simplex projection on the output of a neural network using area constraints of class objects. The proposed method is general and can be seamlessly integrated into any CNN architecture. Moreover, the projection layer allows strongly supervised models to be adapted to weakly supervised models effortlessly by substituting ground truth labels. Our experiments have shown that applying such an operation on the output of a CNN improves the accuracy of semantic segmentation in a weakly supervised setting with image-level labels.",
"title": ""
}
] |
scidocsrr
|
02782f30807502d23a4da3ce5b408660
|
Time Series Segmentation through Automatic Feature Learning
|
[
{
"docid": "d3b0957b31f47620c0fa8e65a1cc086a",
"text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.",
"title": ""
}
] |
[
{
"docid": "cbc7ade273c2a6b66b9a739f8ac17093",
"text": "A digital clock and data recovery (CDR) employing a time-dithered delta-sigma modulator (TDDSM) is presented. By enabling hybrid dithering of a sampling period as well as an output bit of the TDDSM, the proposed CDR enhances the resolution of digitally controlled oscillator, removes a low-pass filter in the integral path, and reduces jitter generation. Fabricated in a 65-nm CMOS process, the proposed CDR operates at 5-Gb/s data rate with BER <; 10-12 for PRBS 31. The CDR consumes 13.32 mW at 5 Gb/s and achieves 2.14 and 29.7 ps of a long-term rms and peak-to-peak jitter, respectively.",
"title": ""
},
{
"docid": "661d5db6f4a8a12b488d6f486ea5995e",
"text": "Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, but there are no comprehensive studies that completely cover all different aspects in the problem. This paper presented a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture was proposed which was divided into four steps specifying through four pivotal questions starting with ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system could be gained by answering these questions. Each step of this reference roadmap proposed a specific concern of a special portion of the issue. Two main research gaps were proposed by this reference roadmap.",
"title": ""
},
{
"docid": "19571239c6930597c9cc3acbb0a8bcbc",
"text": "People with motor impairments often have difficulty entering text accurately when typing on a keyboard. Thes e users also may have trouble correcting errors. We introdu ce TrueKeys, a system that automatically corrects typing errors as they occur. TrueKeys utilizes a word freque ncy list and a model of the user’s keyboard layout to identi fy typing errors and choose appropriate corrections. We evalu ated TrueKeys with 9 motor-impaired and 9 able-bodied us ers who completed phrase-typing trials with correction enabled and disabled. Results show that using TrueKeys sign ifica tly reduced the number of uncorrected errors for bot h motor-impaired (2.09% vs. 3.44% errors) and able-bodied users (1.03% vs. 1.83% errors).",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "0a0ec569738b90f44b0c20870fe4dc2f",
"text": "Transactional memory provides a concurrency control mechanism that avoids many of the pitfalls of lock-based synchronization. Researchers have proposed several different implementations of transactional memory, broadly classified into software transactional memory (STM) and hardware transactional memory (HTM). Both approaches have their pros and cons: STMs provide rich and flexible transactional semantics on stock processors but incur significant overheads. HTMs, on the other hand, provide high performance but implement restricted semantics or add significant hardware complexity. This paper is the first to propose architectural support for accelerating transactions executed entirely in software. We propose instruction set architecture (ISA) extensions and novel hardware mechanisms that improve STM performance. We adapt a high-performance STM algorithm supporting rich transactional semantics to our ISA extensions (called hardware accelerated software transactional memory or HASTM). HASTM accelerates fully virtualized nested transactions, supports language integration, and provides both object-based and cache-line based conflict detection. We have implemented HASTM in an accurate multi-core IA32 simulator. Our simulation results show that (1) HASTM single-thread performance is comparable to a conventional HTM implementation; (2) HASTM scaling is comparable to a STM implementation; and (3) HASTM is resilient to spurious aborts and can scale better than HTM in a multi-core setting. Thus, HASTM provides the flexibility and rich semantics of STM, while giving the performance of HTM.",
"title": ""
},
{
"docid": "a6e252e796bbb397eaefefcf2462cef8",
"text": "Recommending new items to existing users has remained a challenging problem due to absence of user’s past preferences for these items. The user personalized non-collaborative methods based on item features can be used to address this item cold-start problem. These methods rely on similarities between the target item and user’s previous preferred items. While computing similarities based on item features, these methods overlook the interactions among the features of the items and consider them independently. Modeling interactions among features can be helpful as some features, when considered together, provide a stronger signal on the relevance of an item when compared to case where features are considered independently. To address this important issue, in this work we introduce the Feature-based factorized Bilinear Similarity Model (FBSM), which learns factorized bilinear similarity model for Top-n recommendation of new items, given the information about items preferred by users in past as well as the features of these items. We carry out extensive empirical evaluations on benchmark datasets, and we find that the proposed FBSM approach improves upon traditional non-collaborative methods in terms of recommendation performance. Moreover, the proposed approach also learns insightful interactions among item features from data, which lead to deep understanding on how these interactions contribute to personalized recommendation.",
"title": ""
},
{
"docid": "df836ce803ea8c4e9d624391bba0da9b",
"text": "scores at high levels of adaptive functioning will have poorer precision (i.e., higher SEMs) than those at low levels. SUMMARY. The Vineland–II was designed to be an easily used, standardized measure of key domains of adaptive behavior–Communication, Daily Living Skills, and Socialization–that play a prominent role in the diagnosis of mental retardation and other developmental disabilities. The instrument clearly meets its goals of ease of use, clear procedures for the calculation of both raw and scaled scores, and clear and comprehensive information regarding its reliability and validity. The Vineland–II deserves to be considered among the best measures of adaptive behavior currently available, and the use of this instrument for making high-stakes decisions regarding individuals is recommended. Purpose: \" Designed to assess the cognitive ability of adolescents and adults \" ; \" provides subtest and composite scores that represent intellectual functioning in specific cognitive domains, as well as a composite score that represents general intellectual ability. \" Population: Ages 16-0 to 90-11. DESCRIPTION. The Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) is the most recent version of the most frequently administered intelligence test for older adolescents and adults, which traces its roots back to the 1939 Wechsler-Bellevue Intelligence Scale (Wechsler, 1939). Consistent with Wechsler's definition of intelligence (i.e., \" global capacity \" ; Wechsler, 1939, p. 229) and all versions of his tests, the WAIS-IV seeks to measure general intelligence through the administration of numerous subtests, each of which is an indicator and estimate of intelligence. The WAIS-IV is a major and important revision of its predecessor and clinicians should appreciate many of the changes. DEVELOPMENT. In revising the WAIS-IV, several goals were noted in the technical and interpretive manual including updating theoretical foundations, increasing developmental appropriate-ness, increasing user-friendliness, enhancing clinical utility, and improving psychometric features. Object Assembly and Picture Arrangement subtests were dropped, thus reducing subtests with manipula-tive objects to one (Block Design), as were Digit Symbol-Incidental Learning and Digit Symbol-Copy. New subtests to the WAIS-IV are Visual Puzzles, Figure Weights, and Cancellation. Verbal IQ (VIQ) and Performance IQ (PIQ) are no longer provided as with the Wechsler Intelligence Scale for Children–Fourth Edition (WISC-IV; 16:262). Administration and scoring rules were modified, easier and more difficult items were added to subtests to improve coverage and range, and discontinue rules were reduced for many subtests. Stimulus materials were enlarged as was writing space for Coding. Item bias investigations were reportedly conducted but data …",
"title": ""
},
{
"docid": "086ae308f849990e927a510f00da1b98",
"text": "This demonstration paper presents TouchCORE, a multi-touch enabled software design modelling tool aimed at developing scalable and reusable software design models following the concerndriven software development paradigm. After a quick review of concern-orientation, this paper primarily focusses on the new features that were added to TouchCORE since the last demonstration at Modularity 2014 (were the tool was still called TouchRAM). TouchCORE now provides full support for concern-orientation. This includes support for feature model editing and different modes for feature model and impact model visualization and assessment to best assist the concern designers as well as the concern users. To help the modeller understand the interactions between concerns, TouchCORE now also collects tracing information when concerns are reused and stores that information with the woven models. This makes it possible to visualize from which concern(s) a model element in the woven model has originated.",
"title": ""
},
{
"docid": "e19e7510ce6d5f7517d0366e0771c999",
"text": "Human physical activity recognition from sensor data is a growing area of research due to the widespread adoption of sensor-rich wearable and smart devices. The growing interest resulted in several formulations with multiple proposals for each of them. This paper is interested in activity recognition from short sequences of sensor readings. Traditionally, solutions to this problem have relied on handcrafted features and feature selection from large predefined feature sets. More recently, deep methods have been employed to provide an end-to-end classification system for activity recognition with higher accuracy at the expense of much slower performance. This paper proposes a middle ground in which a deep neural architecture is employed for feature learning followed by traditional feature selection and classification. This approach is shown to outperform state-of-the-art systems on six out of seven experiments using publicly available datasets.",
"title": ""
},
{
"docid": "5b0e088e2bddd0535bc9d2dfbfeb0298",
"text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.",
"title": ""
},
{
"docid": "30e15e8a3e6eaf424b2f994d2631ac37",
"text": "This paper presents a volumetric stereo and silhouette fusion algorithm for acquiring high quality models from multiple calibrated photographs. Our method is based on computing and merging depth maps. Different from previous methods of this category, the silhouette information is also applied in our algorithm to recover the shape information on the textureless and occluded areas. The proposed algorithm starts by computing visual hull using a volumetric method in which a novel projection test method is proposed for visual hull octree construction. Then, the depth map of each image is estimated by an expansion-based approach that returns a 3D point cloud with outliers and redundant information. After generating an oriented point cloud from stereo by rejecting outlier, reducing scale, and estimating surface normal for the depth maps, another oriented point cloud from silhouette is added by carving the visual hull octree structure using the point cloud from stereo to restore the textureless and occluded surfaces. Finally, Poisson Surface Reconstruction approach is applied to convert the oriented point cloud both from stereo and silhouette into a complete and accurate triangulated mesh model. The proposed approach has been implemented and the performance of the approach is demonstrated on several real data sets, along with qualitative comparisons with the state-of-the-art image-based modeling techniques according to the Middlebury benchmark.",
"title": ""
},
{
"docid": "231be28aafe8f071cb156d6efed900d4",
"text": "The aim of this review was to investigate current evidence for the type and quality of exercise being offered to chronic low back pain (CLBP) patients, within randomised controlled trials (RCTs), and to assess how treatment outcomes are being measured. A two-fold methodological approach was adopted: a methodological assessment identified RCTs of 'medium' or 'high' methodological quality. Exercise quality was subsequently assessed according to the predominant exercise used. Outcome measures were analysed based on current recommendations. Fifty-four relevant RCTs were identified, of which 51 were scored for methodological quality. Sixteen RCTs involving 1730 patients qualified for inclusion in this review based upon their methodological quality, and chronicity of symptoms; exercise had a positive effect in all 16 trials. Twelve out of 16 programmes incorporated strengthening exercise, of which 10 maintained their positive results at follow-up. Supervision and adequate compliance were common aspects of trials. A wide variety of outcome measures were used. Outcome measures did not adequately represent the guidelines for impairment, activity and participation, and impairment measures were over-represented at the expense of others. Despite the variety offered, exercise has a positive effect on CLBP patients, and results are largely maintained at follow-up. Strengthening is a common component of exercise programmes, however, the role of exercise co-interventions must not be overlooked. More high quality trials are needed to accurately assess the role of supervision and follow-up, together with the use of more appropriate outcome measures.",
"title": ""
},
{
"docid": "fe5aeb8b41d7f6875ab00f6540d9d1e5",
"text": "Credit card fraud is an expensive problem for many financial institutions, costing billions of dollars to companies annually. Many adversaries still evade fraud detection systems because these systems often do not include information about the adversary's knowledge of the fraud detection mechanism. This project aims to include information about the “fraudster's” motivations and knowledge base into an adaptive fraud detection system. In this project, we use a game theoretical adversarial learning approach in order to model the fraudster's best strategy and pre-emptively adapt the fraud detection system to better classify these future fraudulent transactions. Using a logistic regression classifier as the fraud detection mechanism, we initially identify the best strategy for the adversary based on the number of fraudulent transactions that go undetected, and assume that the adversary uses this strategy for future transactions in order to improve our classifier. Prior research has used game theoretic models for adversarial learning in the domains of credit card fraud and email spam, but this project adds to the literature by extending these frameworks to a practical, real-world data set. Test results show that our adversarial framework produces an increasing AUC score on validation sets over several iterations in comparison to the static model usually employed by credit card companies.",
"title": ""
},
{
"docid": "f20bbbd06561f9cde0f1d538667635e2",
"text": "Artificial neural networks are finding many uses in the medical diagnosis application. The goal of this paper is to evaluate artificial neural network in disease diagnosis. Two cases are studied. The first one is acute nephritis disease; data is the disease symptoms. The second is the heart disease; data is on cardiac Single Proton Emission Computed Tomography (SPECT) images. Each patient classified into two categories: infected and non-infected. Classification is an important tool in medical diagnosis decision support. Feed-forward back propagation neural network is used as a classifier to distinguish between infected or non-infected person in both cases. The results of applying the artificial neural networks methodology to acute nephritis diagnosis based upon selected symptoms show abilities of the network to learn the patterns corresponding to symptoms of the person. In this study, the data were obtained from UCI machine learning repository in order to diagnosed diseases. The data is separated into inputs and targets. The targets for the neural network will be identified with 1's as infected and will be identified with 0's as non-infected. In the diagnosis of acute nephritis disease; the percent correctly classified in the simulation sample by the feed-forward back propagation network is 99 percent while in the diagnosis of heart disease; the percent correctly classified in the simulation sample by the feed-forward back propagation network is 95 percent.",
"title": ""
},
{
"docid": "d2c761a12fe02c07f06b04eaa1bd4fd7",
"text": "-Human faces provide a useful cue in indexing video content . In this paper, we present a highly efficient system that can rapidly detect human face regions in MPEG video sequences. The underlying algorithm takes the inverse quantized DCT coefficients of MPEG video as the input, and outputs the locations of the detected face regions. The algorithm consists of three stages, where chrominance, shape, and frequency information are used respectively. By detecting faces directly in the compressed domain, there is no need to carry out the inverse DCT transform, so that the algorithm can run faster than the real time. In our experiments, the algorithm detected 85-92% of the faces in three test sets, including both intra-frame and inter-frame coded image frames from news video. The average run time ranges from 13 to 33 milliseconds per frame. The algorithm can be applied to JPEG unconstrained images or motion JPEG video as well.",
"title": ""
},
{
"docid": "29f46a8f8275fe22cb1506c8ba4175a6",
"text": "Improving disaster management and recovery techniques is one of national priorities given the huge toll caused by man-made and nature calamities. Data-driven disaster management aims at applying advanced data collection and analysis technologies to achieve more effective and responsive disaster management, and has undergone considerable progress in the last decade. However, to the best of our knowledge, there is currently no work that both summarizes recent progress and suggests future directions for this emerging research area. To remedy this situation, we provide a systematic treatment of the recent developments in data-driven disaster management. Specifically, we first present a general overview of the requirements and system architectures of disaster management systems and then summarize state-of-the-art data-driven techniques that have been applied on improving situation awareness as well as in addressing users’ information needs in disaster management. We also discuss and categorize general data-mining and machine-learning techniques in disaster management. Finally, we recommend several research directions for further investigations.",
"title": ""
},
{
"docid": "a83c1f4a17f40d647a263e35f2cc7851",
"text": "Designers of human computation systms often face the need to aggregate noisy information provided by multiple people. While voting is often used for this purpose, the choice of voting method is typically not principled. We conduct extensive experiments on Amazon Mechanical Turk to better understand how different voting rules perform in practice. Our empirical conclusions show that noisy human voting can differ from what popular theoretical models would predict. Our short-term goal is to motivate the design of better human computation systems; our long-term goal is to spark an interaction between researchers in (computational) social choice and human computation.",
"title": ""
},
{
"docid": "0a62f7f2dd743b341679e4ac54e741c4",
"text": "Today, more and more companies in e-business have acknowledged that their business strategies should focus on identifying those customers who are likely to churn as markets become increasingly saturated. In this paper a new method was put forward to analyze and predict customer churn behavior based on data mining techniques, such as decision tree, clustering, neural network, etc. By collecting a large amount of customer churn questionnaire data, this paper established an e-business customer churn prediction and analysis model, analyzed the related factors which influence customer retention of e-business enterprises, and proposed many corresponding loss control measures.",
"title": ""
},
{
"docid": "339f7a0031680a2d930f143700d66d5e",
"text": "We propose an approach to generate natural language questions from knowledge graphs such as DBpedia and YAGO. We stage this in the setting of a quiz game. Our approach, though, is general enough to be applicable in other settings. Given a topic of interest (e.g., Soccer) and a difficulty (e.g., hard), our approach selects a query answer, generates a SPARQL query having the answer as its sole result, before verbalizing the question.",
"title": ""
},
{
"docid": "2093c7b23da9d4260efb3cd80414255f",
"text": "In the Internet of Things (IoT), resources' constrained tiny sensors and devices could be connected to unreliable and untrusted networks. Nevertheless, securing IoT technology is mandatory, due to the relevant data handled by these devices. Intrusion Detection System (IDS) is the most efficient technique to detect the attackers with a high accuracy when cryptography is broken. This is achieved by combining the advantages of anomaly and signature detection, which are high detection and low false positive rates, respectively. To achieve a high detection rate, the anomaly detection technique relies on a learning algorithm to model the normal behavior of a node and when a new attack pattern (often known as signature) is detected, it will be modeled with a set of rules. This latter is used by the signature detection technique for attack confirmation. However, the activation of anomaly detection for low-resource IoT devices could generate a high-energy consumption, specifically when this technique is activated all the time. Using game theory and with the help of Nash equilibrium, anomaly detection is activated only when a new attack's signature is expected to occur. This will make a balance between accuracy detection and energy consumption. Simulation results show that the proposed anomaly detection approach requires a low energy consumption to detect the attacks with high accuracy (i.e. high detection and low false positive rates).",
"title": ""
}
] |
scidocsrr
|
45ff0cf4531ac0d9c237d79004e66b1c
|
Determinants of Leadership Style in Big Five Personality Dimensions
|
[
{
"docid": "612e460c0f6e328d7516bfba7b674517",
"text": "There is universality in the transactional-transformational leadership paradigm. That is, the same conception of phenomena and relationships can be observed in a wide range of organizations and cultures. Exceptions can be understood as a consequence of unusual attributes of the organizations or cultures. Three corollaries are discussed. Supportive evidence has been gathered in studies conducted in organizations in business, education, the military, the government, and the independent sector. Likewise, supportive evidence has been accumulated from all but 1 continent to document the applicability of the paradigm.",
"title": ""
}
] |
[
{
"docid": "0f979712b19f19f84f36c838a036ed99",
"text": "In this paper we describe the development and deployment of a wireless sensor network (WSN) to monitor a train tunnel duri ng adjacent construction activity. The tunnel in question is a pa rt of the London Underground system. Construction of tunnels beneat h the existing tunnel is expected to cause deformations. The expe cted deformation values were determined by a detailed geotechnica l analysis. A real-time monitoring system, comprising of 18 sensin g u its and a base-station, was installed along the critical zone of the tunnel to measure the deformations. The sensing units report th ei data to the base-station at periodic intervals. The system was us ed for making continuous measurements for a period of 72 days. This window of time covered the period during which the tunnel bor ing machine (TBM) was active near the critical zone. The deploye d WSN provided accurate data for measuring the displacements and this is corroborated from the tunnel contractor’s data.",
"title": ""
},
{
"docid": "024cebc81fb851a74957e9b15130f9f6",
"text": "RATIONALE\nCardiac lipotoxicity, characterized by increased uptake, oxidation, and accumulation of lipid intermediates, contributes to cardiac dysfunction in obesity and diabetes mellitus. However, mechanisms linking lipid overload and mitochondrial dysfunction are incompletely understood.\n\n\nOBJECTIVE\nTo elucidate the mechanisms for mitochondrial adaptations to lipid overload in postnatal hearts in vivo.\n\n\nMETHODS AND RESULTS\nUsing a transgenic mouse model of cardiac lipotoxicity overexpressing ACSL1 (long-chain acyl-CoA synthetase 1) in cardiomyocytes, we show that modestly increased myocardial fatty acid uptake leads to mitochondrial structural remodeling with significant reduction in minimum diameter. This is associated with increased palmitoyl-carnitine oxidation and increased reactive oxygen species (ROS) generation in isolated mitochondria. Mitochondrial morphological changes and elevated ROS generation are also observed in palmitate-treated neonatal rat ventricular cardiomyocytes. Palmitate exposure to neonatal rat ventricular cardiomyocytes initially activates mitochondrial respiration, coupled with increased mitochondrial polarization and ATP synthesis. However, long-term exposure to palmitate (>8 hours) enhances ROS generation, which is accompanied by loss of the mitochondrial reticulum and a pattern suggesting increased mitochondrial fission. Mechanistically, lipid-induced changes in mitochondrial redox status increased mitochondrial fission by increased ubiquitination of AKAP121 (A-kinase anchor protein 121) leading to reduced phosphorylation of DRP1 (dynamin-related protein 1) at Ser637 and altered proteolytic processing of OPA1 (optic atrophy 1). Scavenging mitochondrial ROS restored mitochondrial morphology in vivo and in vitro.\n\n\nCONCLUSIONS\nOur results reveal a molecular mechanism by which lipid overload-induced mitochondrial ROS generation causes mitochondrial dysfunction by inducing post-translational modifications of mitochondrial proteins that regulate mitochondrial dynamics. These findings provide a novel mechanism for mitochondrial dysfunction in lipotoxic cardiomyopathy.",
"title": ""
},
{
"docid": "3e335d336d3c9bce4dbdf24402b8eb17",
"text": "Unlike traditional database management systems which are organized around a single data model, a multi-model database (MMDB) utilizes a single, integrated back-end to support multiple data models, such as document, graph, relational, and key-value. As more and more platforms are proposed to deal with multi-model data, it becomes crucial to establish a benchmark for evaluating the performance and usability of MMDBs. Previous benchmarks, however, are inadequate for such scenario because they lack a comprehensive consideration for multiple models of data. In this paper, we present a benchmark, called UniBench, with the goal of facilitating a holistic and rigorous evaluation of MMDBs. UniBench consists of a mixed data model, a synthetic multi-model data generator, and a set of core workloads. Specifically, the data model simulates an emerging application: Social Commerce, a Web-based application combining E-commerce and social media. The data generator provides diverse data format including JSON, XML, key-value, tabular, and graph. The workloads are comprised of a set of multi-model queries and transactions, aiming to cover essential aspects of multi-model data management. We implemented all workloads on ArangoDB and OrientDB to illustrate the feasibility of our proposed benchmarking system and show the learned lessons through the evaluation of these two multi-model databases. The source code and data of this benchmark can be downloaded at http://udbms.cs.helsinki.fi/bench/.",
"title": ""
},
{
"docid": "f07d44c814bdb87ffffc42ace8fd53a4",
"text": "We describe a batch method that uses a sizeable fraction of the training set at each iteration, and that employs secondorder information. • To improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. • This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS. • We show how to perform stable quasi-Newton updating in the multi-batch setting, illustrate the behavior of the algorithm in a distributed computing platform, and study its convergence properties for both the convex and nonconvex cases. Introduction min w∈Rd F (w) = 1 n n ∑ i=1 f (w;x, y) Idea: select a sizeable sample Sk ⊂ {1, . . . , n} at every iteration and perform quasi-Newton steps 1. Distributed computing setting: distributed gradient computation (with faults) 2. Multi-Batch setting: samples are changed at every iteration to accelerate learning Goal: show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost, or special synchronization Issue: samples used at the beginning and at the end of every iteration are different • potentially harmful for quasi-Newton methods Key: controlled sampling • consecutive samples overlap Sk ∩ Sk+1 = Ok 6= ∅ • gradient differences based on this overlap – stable quasi-Newton updates Multi-Batch L-BFGS Method At the k-th iteration: • sample Sk ⊂ {1, . . . , n} chosen, and iterates updated via wk+1 = wk − αkHkgk k where gk k is the batch gradient g Sk k = 1 |Sk| ∑ i∈Sk ∇f ( wk;x , y ) and Hk is the inverse BFGS Hessian approximation Hk+1 =V T k HkVk + ρksks T k ρk = 1 yT k sk , Vk = 1− ρkyksk • to ensure consistent curvature pair updates sk+1 = wk+1 − wk, yk+1 = gk k+1 − g Ok k where gk k+1 and g Ok k are gradients based on the overlapping samples only Ok = Sk ∩ Sk+1 Sample selection:",
"title": ""
},
{
"docid": "6e88eef7f0e5d3cb19c6ef1f8c93d9e1",
"text": "PURPOSE\nThe purpose of this investigation was to evaluate the quality and strength of scientific evidence supporting an etiologic relationship between a disease and a proposed risk factor using a scoring system based on the Bradford Hill criteria for causal association.\n\n\nMETHODS\nA quantitative score based on the Bradford Hill criteria (qBHs) was used to evaluate 117 articles presenting original data regarding the etiology of carpal tunnel syndrome: 33 (28%) that evaluated biological (structural or genetic) risk factors, 51 (44%) that evaluated occupational (environment or activity-related) risk factors, and 33 (28%) that evaluated both types of risk factors.\n\n\nRESULTS\nThe quantitative Bradford Hill scores of 2 independent observers showed very good agreement, supporting the reliability of the instrument. The average qBHs was 12.2 points (moderate association) among biological risk factors compared with 5.2 points (poor association) for occupational risk factors. The highest average qBHs was observed for genetic factors (14.2), race (11.7), and anthropometric measures of the wrist (11.3 points) with all studies finding a moderate causal association. The highest average qBHs among occupational risk factors was observed for activities requiring repetitive hand use (6.5 points among the 30 of 45 articles that reported a causal association), substantial exposure to vibration (6.3 points; 14 of 20 articles), and type of occupation (5.6 points; 38 of 53 articles), with the findings being much less consistent.\n\n\nCONCLUSIONS\nAccording to a quantitative analysis of published scientific evidence, the etiology of carpal tunnel syndrome is largely structural, genetic, and biological, with environmental and occupational factors such as repetitive hand use playing a minor and more debatable role. Speculative causal theories should be analyzed through a rigorous approach prior to wide adoption.",
"title": ""
},
{
"docid": "68bc2abd13bcd19566eed66f0031c934",
"text": "As DRAM density keeps increasing, more rows need to be protected in a single refresh with the constant refresh number. Since no memory access is allowed during a refresh, the refresh penalty is no longer trivial and can result in significant performance degradation. To mitigate the refresh penalty, a Concurrent-REfresh-Aware Memory system (CREAM) is proposed in this work so that memory access and refresh can be served in parallel. The proposed CREAM architecture distinguishes itself with the following key contributions: (1) Under a given DRAM power budget, sub-rank-level refresh (SRLR) is developed to reduce refresh power and the saved power is used to enable concurrent memory access; (2) sub-array-level refresh (SALR) is also devised to effectively lower the probability of the conflict between memory access and refresh; (3) In addition, novel sub-array level refresh scheduling schemes, such as sub-array round-robin and dynamic scheduling, are designed to further improve the performance. A quasi-ROR interface protocol is proposed so that CREAM is fully compatible with JEDEC-DDR standard with negligible hardware overhead and no extra pin-out. The experimental results show that CREAM can improve the performance by 12.9% and 7.1% over the conventional DRAM and the Elastic-Refresh DRAM memory, respectively.",
"title": ""
},
{
"docid": "705b72cc6b535f1745d75fb945e5925e",
"text": "An increasing number of military systems are being developed using Service Oriented Architecture (SOA). Some of the features that make SOA appealing, like loose coupling, dynamism and composition-oriented system construction, make securing service-based systems more complicated. We have been developing Advanced Protected Services (APS) technologies for improving the resilience and survival of SOA services under cyber attack. These technologies introduce a layer to absorb, contain, and adapt to cyber attacks prior to the attacks reaching critical services. This paper describes an evaluation of these advanced protection technologies using a set of cooperative red team exercises. In these exercises, an independent red team launched attacks on a protected enclave in order to evaluate the efficacy and efficiency of the prototype protection technologies. The red team was provided full knowledge of the system under test and its protections, was given escalating levels of access to the system, and operated within agreed upon rules of engagement designed to scope the testing on useful evaluation results. We also describe the evaluation results and the use of cooperative red teaming as an effective means of evaluating cyber security.",
"title": ""
},
{
"docid": "7618fa5b704c892b6b122f3602893d75",
"text": "At the dawn of the second automotive century it is apparent that the competitive realm of the automotive industry is shifting away from traditional classifications based on firms’ production systems or geographical homes. Companies across the regional and volume spectrum have adopted a portfolio of manufacturing concepts derived from both mass and lean production paradigms, and the recent wave of consolidation means that regional comparisons can no longer be made without considering the complexities induced by the diverse ownership structure and plethora of international collaborations. In this chapter we review these dynamics and propose a double helix model illustrating how the basis of competition has shifted from cost-leadership during the heyday of Ford’s original mass production, to variety and choice following Sloan’s portfolio strategy, to diversification through leadership in design, technology or manufacturing excellence, as in the case of Toyota, and to mass customisation, which marks the current competitive frontier. We will explore how the production paradigms that have determined much of the competition in the first automotive century have evolved, what trends shape the industry today, and what it will take to succeed in the automotive industry of the future. 1 This chapter provides a summary of research conducted as part of the ILIPT Integrated Project and the MIT International Motor Vehicle Program (IMVP), and expands on earlier works, including the book The second century: reconnecting customer and value chain through build-toorder (Holweg and Pil 2004) and the paper Beyond mass and lean production: on the dynamics of competition in the automotive industry (Économies et Sociétés: Série K: Économie de l’Enterprise, 2005, 15:245–270).",
"title": ""
},
{
"docid": "0943667f7424875ea7a42dc7d0e422b4",
"text": "This paper introduces a novel concept of an air bearing test bench for CubeSat ground testing together with the corresponding dynamic parameter identification method. Contrary to existing air bearing test benches, the proposed concept allows three degree-of-freedom unlimited rotations and minimizes the influence of the test bench on the tested CubeSat. These advantages are made possible by the use of a robotic wrist which rotates air bearings in order to make them follow the CubeSat motion. Another keystone of the test bench is an accurate balancing of the tested CubeSat. Indeed, disturbing factors acting on the satellite shall be minimized, the most significant one being the gravity torque. An efficient balancing requires the CubeSat center of mass position to be accurately known. Usual techniques of dynamic parameter identification cannot be directly applied because of the frictionless suspension of the CubeSat in the test bench and, accordingly, due to the lack of external actuation. In this paper, a new identification method is proposed. This method does not require any external actuation and is based on the sampling of free oscillating motions of the CubeSat mounted on the test bench.",
"title": ""
},
{
"docid": "2ad34a7b1ed6591d683fe1450d1bd25f",
"text": "An extension of the Gauss-Newton method for nonlinear equations to convex composite optimization is described and analyzed. Local quadratic convergence is established for the minimization of h o F under two conditions, namely h has a set of weak sharp minima, C, and there is a regular point of the inclusion F ( x ) E C. This result extends a similar convergence result due to Womersley (this journal, 1985) which employs the assumption of a strongly unique solution of the composite function h o F. A backtracking line-search is proposed as a globalization strategy. For this algorithm, a global convergence result is established, with a quadratic rate under the regularity assumption.",
"title": ""
},
{
"docid": "867a6923a650bdb1d1ec4f04cda37713",
"text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.",
"title": ""
},
{
"docid": "9c0db9ac984a93d4a0019dd76e6ccdcf",
"text": "This paper presents a high power efficient broad-band programmable gain amplifier with multi-band switching. The proposed two stage common-emitter amplifier, by using the current reuse topology with a magnetically coupled transformer and a MOS varactor bank as a frequency tunable load, achieves a 55.9% peak power added efficiency (PAE), a peak saturated power of +11.1 dBm, a variable gain from 1.8 to 16 dB, and a tunable large signal 3-dB bandwidth from 24.3 to 35 GHz. The design is fabricated in a commercial 0.18- μm SiGe BiCMOS technology and measured with an output 1-dB gain compression point which is better than +9.6 dBm and a maximum dc power consumption of 22.5 mW from a single 1.8 V supply. The core amplifier, excluding the measurement pads, occupies a die area of 500 μm×450 μm.",
"title": ""
},
{
"docid": "607e66ac8c8fdc878f6f72d5c8695561",
"text": "We present evidence that the giant amoeboid organism, the true slime mold, constructs a network appropriate for maximizing nutrient uptake. The body of the plasmodium of Physarum polycephalum contains a network of tubular elements by means of which nutrients and chemical signals circulate through the organism. When food pellets were presented at different points on the plasmodium it accumulated at each pellet with a few tubes connecting the plasmodial concentrations. The geometry of the network depended on the positions of the food sources. Statistical analysis showed that the network geometry met the multiple requirements of a smart network: short total length of tubes, close connections among all the branches (a small number of transit food-sites between any two food-sites) and tolerance of accidental disconnection of the tubes. These findings indicate that the plasmodium can achieve a better solution to the problem of network configuration than is provided by the shortest connection of Steiner's minimum tree.",
"title": ""
},
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
},
{
"docid": "ca2258408035374cd4e7d1519e24e187",
"text": "In this paper we propose a novel application of Hidden Markov Models to automatic generation of informative headlines for English texts. We propose four decoding parameters to make the headlines appear more like Headlinese, the language of informative newspaper headlines. We also allow for morphological variation in words between headline and story English. Informal and formal evaluations indicate that our approach produces informative headlines, mimicking a Headlinese style generated by humans.",
"title": ""
},
{
"docid": "1e139fa9673f83ac619a5da53391b1ef",
"text": "In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves the features inspired by the free energy principle and the structural degradation model. Furthermore, the free energy theory also reveals that the HVS always tries to infer the meaningful part from the visual stimuli. In terms of this finding, we first predict an image that the HVS perceives from a distorted image based on the free energy theory, then the second group of features is composed of some HVS-inspired features (such as structural information and gradient magnitude) computed using the distorted and predicted images. The third group of features quantifies the possible losses of “naturalness” in the distorted image by fitting the generalized Gaussian distribution to mean subtracted contrast normalized coefficients. After feature extraction, our algorithm utilizes the support vector machine based regression module to derive the overall quality score. Experiments on LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of our introduced NR IQA metric compared to the state-of-the-art.",
"title": ""
},
{
"docid": "88c21aaa6d3386f824583a37d32562e0",
"text": "Rising energy costs in large data centers are driving an agenda for energy-efficient computing. In this paper, we focus on the role of database software in affecting, and, ultimately, improving the energy efficiency of a server. We first characterize the power-use profiles of database operators under different configuration parameters. We find that common database operations can exercise the full dynamic power range of a server, and that the CPU power consumption of different operators, for the same CPU utilization, can differ by as much as 60%. We also find that for these operations CPU power does not vary linearly with CPU utilization.\n We then experiment with several classes of database systems and storage managers, varying parameters that span from different query plans to compression algorithms and from physical layout to CPU frequency and operating system scheduling. Contrary to what recent work has suggested, we find that within a single node intended for use in scale-out (shared-nothing) architectures, the most energy-efficient configuration is typically the highest performing one. We explain under which circumstances this is not the case, and argue that these circumstances do not warrant a retargeting of database system optimization goals. Further, our results reveal opportunities for cross-node energy optimizations and point out directions for new scale-out architectures.",
"title": ""
},
{
"docid": "74c86a2ff975d8298b356f0243e82ab0",
"text": "Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited by the ability of capturing mainly the statistics of training data, and is hardly adaptive to new scenarios or flexible for acquiring new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition and propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and use the justlearned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "e14be8d7d6889a59a7a85b0c025a612c",
"text": "The present study was designed to examine the effects of median nerve stimulation on motoneurones of remote muscles in healthy subjects using H-reflex, averaged EMG and PSTH methods. Stimulation of the median nerve induced facilitation of soleus H-reflex from about 50 ms and it reached a peak at about 100 ms of conditioning-test interval. Afferents that induced the facilitation consisted of at least two types of fibres, the high-threshold cutaneous fibres and the low-threshold fibres. When the effects were examined by the averaged surface EMG and PSTH, no facilitation but rather inhibition or inhibition-facilitation was induced in all tested muscles except for the upper limb muscles on the stimulated side. The inhibition latency was shortest in masseter muscle and longest in leg muscles, while values for the contralateral upper limb muscles were in the middle, indicating that the onset of inhibition was delayed from rostral to caudal muscles. Inputs from the median nerve converged to inhibitory interneurones, which mediate the masseter inhibitory reflex. Our findings suggested that inputs from the median nerve initially ascend to the brain, at least to the brainstem, and then descend to the spinal cord. Therefore, inhibition induced by median nerve stimulation was not considered as an interlimb reflex mediated by a propriospinal pathway, but long-loop reflex, at least via the pons. The discrepancy between the results of reflex and motor units suggests that facilitation of soleus H-reflex following median nerve stimulation was mainly due to reduced presynaptic inhibition.",
"title": ""
},
{
"docid": "1c3cf3ccdb3b7129c330499ca909b193",
"text": "Procedural methods for animating turbulent fluid are often preferred over simulation, both for speed and for the degree of animator control. We offer an extremely simple approach to efficiently generating turbulent velocity fields based on Perlin noise, with a formula that is exactly incompressible (necessary for the characteristic look of everyday fluids), exactly respects solid boundaries (not allowing fluid to flow through arbitrarily-specified surfaces), and whose amplitude can be modulated in space as desired. In addition, we demonstrate how to combine this with procedural primitives for flow around moving rigid objects, vortices, etc.",
"title": ""
}
] |
scidocsrr
|
ba5e2f794a71d0db1f3a2a106a64ad2e
|
Max-Cosine Matching Based Neural Models for Recognizing Textual Entailment
|
[
{
"docid": "4cf6a69833d7e553f0818aa72c99c938",
"text": "Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q/A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q/A system, accuracy can be increased by as much as 20% overall.",
"title": ""
},
{
"docid": "c117da74c302d9e108970854d79e54fd",
"text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.",
"title": ""
},
{
"docid": "c0ee7fef7f96db6908f49170c6c75b2c",
"text": "Improving Neural Networks with Dropout Nitish Srivastava Master of Science Graduate Department of Computer Science University of Toronto 2013 Deep neural nets with a huge number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from a neural network during training. This prevents the units from co-adapting too much. Dropping units creates thinned networks during training. The number of possible thinned networks is exponential in the number of units in the network. At test time all possible thinned networks are combined using an approximate model averaging procedure. Dropout training followed by this approximate model combination significantly reduces overfitting and gives major improvements over other regularization methods. In this work, we describe models that improve the performance of neural networks using dropout, often obtaining state-of-the-art results on benchmark datasets.",
"title": ""
}
] |
[
{
"docid": "ce3d82fc815a965a66be18d20434e80f",
"text": "In this paper the three-phase grid connected inverter has been investigated. The inverter’s control strategy is based on the adaptive hysteresis current controller. Inverter connects the DG (distributed generation) source to the grid. The main advantages of this method are constant switching frequency, better current control, easy filter design and less THD (total harmonic distortion). Since a constant and ripple free dc bus voltage is not ensured at the output of alternate energy sources, the main aim of the proposed algorithm is to make the output of the inverter immune to the fluctuations in the dc input voltage This inverter can be used to connect the medium and small-scale wind turbines and solar cells to the grid and compensate local load reactive power. Reactive power compensating improves SUF (system usage factor) from nearly 20% (in photovoltaic systems) to 100%. The simulation results confirm that switching frequency is constant and THD of injected current is low.",
"title": ""
},
{
"docid": "7b3da375a856a53b8a303438d015dffe",
"text": "Social Networking Sites (SNS), such as Facebook and LinkedIn, have become the established place for keeping contact with old friends and meeting new acquaintances. As a result, a user leaves a big trail of personal information about him and his friends on the SNS, sometimes even without being aware of it. This information can lead to privacy drifts such as damaging his reputation and credibility, security risks (for instance identity theft) and profiling risks. In this paper, we first highlight some privacy issues raised by the growing development of SNS and identify clearly three privacy risks. While it may seem a priori that privacy and SNS are two antagonist concepts, we also identified some privacy criteria that SNS could fulfill in order to be more respectful of the privacy of their users. Finally, we introduce the concept of a Privacy-enhanced Social Networking Site (PSNS) and we describe Privacy Watch, our first implementation of a PSNS.",
"title": ""
},
{
"docid": "3e9f54363d930c703dfe20941b2568b0",
"text": "Organizations are looking to new graduate nurses to fill expected staffing shortages over the next decade. Creative and effective onboarding programs will determine the success or failure of these graduates as they transition from student to professional nurse. This longitudinal quantitative study with repeated measures used the Casey-Fink Graduate Nurse Experience Survey to investigate the effects of offering a prelicensure extern program and postlicensure residency program on new graduate nurses and organizational outcomes versus a residency program alone. Compared with the nurse residency program alone, the combination of extern program and nurse residency program improved neither the transition factors most important to new nurse graduates during their first year of practice nor a measure important to organizations, retention rates. The additional cost of providing an extern program should be closely evaluated when making financially responsible decisions.",
"title": ""
},
{
"docid": "dc7baa331c6f96e8592c83c018d7b78c",
"text": "development process is a very complex process that, at present, is primarily a human activity. Programming, in software development, requires the use of different types of knowledge: about the problem domain and the programming domain. It also requires many different steps in combining these types of knowledge into one final solution. This paper intends to study the techniques developed in artificial intelligence (AI) from the standpoint of their application in software engineering. In particular, it focuses on techniques developed (or that are being developed) in artificial intelligence that can be deployed in solving problems associated with software engineering processes. This paper highlights a comparative study between the software development and expert system development. This paper also highlights absence of risk management strategies or risk management phase in AI based systems.",
"title": ""
},
{
"docid": "9c9c031767526777ee680f184de4b092",
"text": "The study of interleukin-23 (IL-23) over the past 8 years has led to the realization that cellular immunity is far more complex than previously appreciated, because it is controlled by additional newly identified players. From the analysis of seemingly straightforward cytokine regulation of autoimmune diseases, many limitations of the established paradigms emerged that required reevaluation of the 'rules' that govern the initiation and maintenance of immune responses. This information led to a major revision of the T-helper 1 (Th1)/Th2 hypothesis and discovery of an unexpected link between transforming growth factor-beta-dependent Th17 and inducible regulatory T cells. The aim of this review is to explore the multiple characteristics of IL-23 with respect to its 'id' in autoimmunity, 'ego' in T-cell help, and 'superego' in defense against mucosal pathogens.",
"title": ""
},
{
"docid": "64ba4467dc4495c6828f2322e8f415f2",
"text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.",
"title": ""
},
{
"docid": "f47ac42751d908bd6ef1a290715aa178",
"text": "System identification has become the fundamental pillar of the industry of the 21 century, since allows the integration of advanced model based control strategies with complex process; for diverse reasons, real process are in closed loop due to difference reasons making virtually impossible obtain a model using open loop techniques, in this work an innovative strategy to identify plants in closed loop using an indirect method based on Youla Kucera parameterization and neural nets is presented, stating 4 identifications subproblems with numerous potential applications on real problems",
"title": ""
},
{
"docid": "59574eb62f7c1473abaa564e022a45ee",
"text": "As deep learning (DL) is being rapidly pushed to edge computing, researchers invented various ways to make inference computation more efficient on mobile/IoT devices, such as network pruning, parameter compression, and etc. Quantization, as one of the key approaches, can effectively offload GPU, and make it possible to deploy DL on fixed-point pipeline. Unfortunately, not all existing networks design are friendly to quantization. For example, the popular lightweight MobileNetV1, while it successfully reduces parameter size and computation latency with separable convolution, our experiment shows its quantized models have large performance gap against its float point models. To resolve this, we analyzed the root cause of quantization loss and proposed a quantization-friendly separable convolution architecture. By evaluating the image classification task on ImageNet2012 dataset, our modified MobileNetV1 model can archive 8-bit inference top-1 accuracy in 68.03%, almost closed the gap to the float pipeline.",
"title": ""
},
{
"docid": "ae991359d6e76d0038de5a65f8218732",
"text": "Spatial data mining is the process of discovering interesting and previously unknown, but potentially useful patterns from the spatial and spatiotemporal data. However, explosive growth in the spatial and spatiotemporal data, and the emergence of social media and location sensing technologies emphasize the need for developing new and computationally efficient methods tailored for analyzing big data. In this paper, we review major spatial data mining algorithms by closely looking at the computational and I/O requirements and allude to few applications dealing with big spatial data.",
"title": ""
},
{
"docid": "a1cff98eecf6691777bb89e849645077",
"text": "Information-centric networking(ICN) opens new opportunities in the IoT domain due to its in-network caching capability. This significantly reduces read latency and the load on the origin server. In-network caching in ICN however introduces its own set of challenges because of its ubiquitous caches. Maintaining cache consistency without incurring high overhead is an important problem that needs to be handled in ICN to prevent a client from retrieving stale data. We propose a cache consistency approach based on the rate at which an IoT application generates its data. Our technique is lightweight and can be deployed easily in real network. Our simulation results demonstrate that our proposed algorithm significantly reduces the network traffic as well as the load on the origin server while serving fresh content to the clients.",
"title": ""
},
{
"docid": "711778acf9017cbb5ee3d314e64f9aec",
"text": "This paper presents a novel approach to the fabrication of a soft robotic hand with contact feedback for grasping delicate objects. Each finger has a multilayered structure, consisting of a main structure and sensing elements. The main structure includes a softer layer much thicker than a stiffer layer. The gripping energy of the fingers is generated from the elastic energy of the prestretched softer layers, and controlled by simple tendon strings pulled/released by a single actuation. Due to the prestretching and the difference in moduli among layers, the shape/posture of the fingers in stable state is similar to that of soft fingers actuated by pressurization. Then, we were able to design a soft-fingered hand for different applications by changing the morphological shape of layers. In addition, the hand includes a soft, flexible, and stretchable sensing element for the detection of contact and applied force, which can also be used in other designs of soft fingers. We assessed the ability of the proposed soft hand to grasp food products with feedback of contact location. The experimental results showed that the proposed hand could safely grasp light and delicate objects, such as fruits, and that it could possibly distinguish among objects based on feedback from sensors. The design proposed in this paper may give rise to other soft hand designs, along with the possibility of using morphological imbalanced deformation of multilayered structures in soft robotics research.",
"title": ""
},
{
"docid": "0abbf8df158969484bcb95579af7be6a",
"text": "Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a policy that is different from the currently optimized policy. A common approach is to use importance sampling techniques for compensating for the bias of value function estimators caused by the difference between the data-sampling policy and the target policy. However, existing off-policy methods often do not take the variance of the value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.",
"title": ""
},
{
"docid": "d8cb31c41a2e1ff3f3d43367aa165680",
"text": "This article reviews the evidence for rhythmic categorization that has emerged on the basis of rhythm metrics, and argues that the metrics are unreliable predictors of rhythm which provide no more than a crude measure of timing. It is further argued that timing is distinct from rhythm and that equating them has led to circularity and a psychologically questionable conceptualization of rhythm in speech. It is thus proposed that research on rhythm be based on the same principles for all languages, something that does not apply to the widely accepted division of languages into stress- and syllable-timed. The hypothesis is advanced that these universal principles are grouping and prominence and evidence to support it is provided.",
"title": ""
},
{
"docid": "0f5511aaed3d6627671a5e9f68df422a",
"text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. We also report benefits of very long-term TMR.",
"title": ""
},
{
"docid": "4ddb634e5fe781889f488fdbe86ec2a4",
"text": "How Much of the Corporate-Treasury Yield Spread Is Due to Credit Risk? No consensus has yet emerged from the existing credit risk literature on how much of the observed corporate-Treasury yield spreads can be explained by credit risk. In this paper, we propose a new calibration approach based on historical default data and show that one can indeed obtain consistent estimate of the credit spread across many different economic considerations within the structural framework of credit risk valuation. We find that credit risk accounts for only a small fraction of the observed corporate-Treasury yield spreads for investment grade bonds of all maturities, with the fraction smaller for bonds of shorter maturities; and that it accounts for a much higher fraction of yield spreads for junk bonds. We obtain these results by calibrating each of the models – both existing and new ones – to be consistent with data on historical default loss experience. Different structural models, which in theory can still generate a very large range of credit spreads, are shown to predict fairly similar credit spreads under empirically reasonable parameter choices, resulting in the robustness of our conclusion.",
"title": ""
},
{
"docid": "5f6f0bd98fa03e4434fabe18642a48bc",
"text": "Previous research suggests that women's genital arousal is an automatic response to sexual stimuli, whereas men's genital arousal is dependent upon stimulus features specific to their sexual interests. In this study, we tested the hypothesis that a nonhuman sexual stimulus would elicit a genital response in women but not in men. Eighteen heterosexual women and 18 heterosexual men viewed seven sexual film stimuli, six human films and one nonhuman primate film, while measurements of genital and subjective sexual arousal were recorded. Women showed small increases in genital arousal to the nonhuman stimulus and large increases in genital arousal to both human male and female stimuli. Men did not show any genital arousal to the nonhuman stimulus and demonstrated a category-specific pattern of arousal to the human stimuli that corresponded to their stated sexual orientation. These results suggest that stimulus features necessary to evoke genital arousal are much less specific in women than in men.",
"title": ""
},
{
"docid": "29097a62fcfa349cdd9be06e86098014",
"text": "Metaphor is a pervasive feature of human language that enables us to conceptualize and communicate abstract concepts using more concrete terminology. Unfortunately, it is also a feature that serves to confound a computer’s ability to comprehend natural human language. We present a method to detect linguistic metaphors by inducing a domainaware semantic signature for a given text and compare this signature against a large index of known metaphors. By training a suite of binary classifiers using the results of several semantic signature-based rankings of the index, we are able to detect linguistic metaphors in unstructured text at a significantly higher precision as compared to several baseline approaches.",
"title": ""
},
{
"docid": "97a817932c3fc43906cfd451ac8964da",
"text": "Data science and machine learning are the key technologies when it comes to the processes and products with automatic learning and optimization to be used in the automotive industry of the future. This article defines the terms “data science” (also referred to as “data analytics”) and “machine learning” and how they are related. In addition, it defines the term “optimizing analytics“ and illustrates the role of automatic optimization as a key technology in combination with data analytics. It also uses examples to explain the way that these technologies are currently being used in the automotive industry on the basis of the major subprocesses in the automotive value chain (development, procurement; logistics, production, marketing, sales and after-sales, connected customer). Since the industry is just starting to explore the broad range of potential uses for these technologies, visionary application examples are used to illustrate the revolutionary possibilities that they offer. Finally, the article demonstrates how these technologies can make the automotive industry more efficient and enhance its customer focus throughout all its operations and activities, extending from the product and its development process to the customers and their connection to the product.",
"title": ""
},
{
"docid": "45ec93ccf4b2f6a6b579a4537ca73e9c",
"text": "Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.",
"title": ""
}
] |
scidocsrr
|
86fd6d14bd128affddf6af16f906ac06
|
MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models
|
[
{
"docid": "e8b5fcac441c46e46b67ffbdd4b043e6",
"text": "We present DroidSafe, a static information flow analysis tool that reports potential leaks of sensitive information in Android applications. DroidSafe combines a comprehensive, accurate, and precise model of the Android runtime with static analysis design decisions that enable the DroidSafe analyses to scale to analyze this model. This combination is enabled by accurate analysis stubs, a technique that enables the effective analysis of code whose complete semantics lies outside the scope of Java, and by a combination of analyses that together can statically resolve communication targets identified by dynamically constructed values such as strings and class designators. Our experimental results demonstrate that 1) DroidSafe achieves unprecedented precision and accuracy for Android information flow analysis (as measured on a standard previously published set of benchmark applications) and 2) DroidSafe detects all malicious information flow leaks inserted into 24 real-world Android applications by three independent, hostile Red-Team organizations. The previous state-of-the art analysis, in contrast, detects less than 10% of these malicious flows.",
"title": ""
},
{
"docid": "4a85e3b10ecc4c190c45d0dfafafb388",
"text": "The number of malicious applications targeting the Android system has literally exploded in recent years. While the security community, well aware of this fact, has proposed several methods for detection of Android malware, most of these are based on permission and API usage or the identification of expert features. Unfortunately, many of these approaches are susceptible to instruction level obfuscation techniques. Previous research on classic desktop malware has shown that some high level characteristics of the code, such as function call graphs, can be used to find similarities between samples while being more robust against certain obfuscation strategies. However, the identification of similarities in graphs is a non-trivial problem whose complexity hinders the use of these features for malware detection. In this paper, we explore how recent developments in machine learning classification of graphs can be efficiently applied to this problem. We propose a method for malware detection based on efficient embeddings of function call graphs with an explicit feature map inspired by a linear-time graph kernel. In an evaluation with 12,158 malware samples our method, purely based on structural features, outperforms several related approaches and detects 89% of the malware with few false alarms, while also allowing to pin-point malicious code structures within Android applications.",
"title": ""
}
] |
[
{
"docid": "81476f837dd763301ba065ac78c5bb65",
"text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ee223b75a3a99f15941e4725d261355e",
"text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.",
"title": ""
},
{
"docid": "f0432af5265a08ccde0111d2d05b93e2",
"text": "Cyber security is a critical issue now a days in various different domains in different disciplines. This paper presents a review analysis of cyber hacking attacks along with its experimental results and proposes a new methodology 3SEMCS named as three step encryption method for cyber security. By utilizing this new designed methodology, security at highest level will be easily provided especially on the time of request submission in the search engine as like google during client server communication. During its working a group of separate encryption algorithms are used. The benefit to utilize this three step encryption is to provide more tighten security by applying three separate encryption algorithms in each phase having different operations. And the additional benefit to utilize this methodology is to run over new designed private browser named as “RR” that is termed as Rim Rocks correspondingly this also help to check the authenticated sites or phishing sites by utilizing the strategy of passing URL address from phishing tank. This may help to block the phisher sites and user will relocate on previous page. The purpose to design this personnel browser is to enhance the level of security by",
"title": ""
},
{
"docid": "93d4f159eb718b6e8d2b5cb252f7bb6c",
"text": "We present RRT, the first asymptotically optimal samplingbased motion planning algorithm for real-time navigation in dynamic environments (containing obstacles that unpredictably appear, disappear, and move). Whenever obstacle changes are observed, e.g., by onboard sensors, a graph rewiring cascade quickly updates the search-graph and repairs its shortest-path-to-goal subtree. Both graph and tree are built directly in the robot’s state space, respect the kinematics of the robot, and continue to improve during navigation. RRT is also competitive in static environments—where it has the same amortized per iteration runtime as RRT and RRT* Θ (logn) and is faster than RRT ω ( log n ) . In order to achieve O (logn) iteration time, each node maintains a set of O (logn) expected neighbors, and the search graph maintains -consistency for a predefined .",
"title": ""
},
{
"docid": "d5d2b61493ed11ee74d566b7713b57ba",
"text": "BACKGROUND\nSymptomatic breakthrough in proton pump inhibitor (PPI)-treated gastro-oesophageal reflux disease (GERD) patients is a common problem with a range of underlying causes. The nonsystemic, raft-forming action of alginates may help resolve symptoms.\n\n\nAIM\nTo assess alginate-antacid (Gaviscon Double Action, RB, Slough, UK) as add-on therapy to once-daily PPI for suppression of breakthrough reflux symptoms.\n\n\nMETHODS\nIn two randomised, double-blind studies (exploratory, n=52; confirmatory, n=262), patients taking standard-dose PPI who had breakthrough symptoms, assessed by Heartburn Reflux Dyspepsia Questionnaire (HRDQ), were randomised to add-on Gaviscon or placebo (20 mL after meals and bedtime). The exploratory study endpoint was change in HRDQ score during treatment vs run-in. The confirmatory study endpoint was \"response\" defined as ≥3 days reduction in the number of \"bad\" days (HRDQ [heartburn/regurgitation] >0.70) during treatment vs run-in.\n\n\nRESULTS\nIn the exploratory study, significantly greater reductions in HRDQ scores (heartburn/regurgitation) were observed in the Gaviscon vs placebo (least squares mean difference [95% CI] -2.10 [-3.71 to -0.48]; P=.012). Post hoc \"responder\" analysis of the exploratory study also revealed significantly more Gaviscon patients (75%) achieved ≥3 days reduction in \"bad\" days vs placebo patients (36%), P=.005. In the confirmatory study, symptomatic improvement was observed with add-on Gaviscon (51%) but there was no significant difference in response vs placebo (48%) (OR (95% CI) 1.15 (0.69-1.91), P=.5939).\n\n\nCONCLUSIONS\nAdding Gaviscon to PPI reduced breakthrough GERD symptoms but a nearly equal response was observed for placebo. Response to intervention may vary according to whether symptoms are functional in origin.",
"title": ""
},
{
"docid": "d18ed4c40450454d6f517c808da7115a",
"text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.",
"title": ""
},
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
},
{
"docid": "f04a17b6e996be1828d666f70b055c46",
"text": "Machine learning methods are becoming integral to scientific inquiry in numerous disciplines. We demonstrated that machine learning can be used to predict the performance of a synthetic reaction in multidimensional chemical space using data obtained via high-throughput experimentation. We created scripts to compute and extract atomic, molecular, and vibrational descriptors for the components of a palladium-catalyzed Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline in the presence of various potentially inhibitory additives. Using these descriptors as inputs and reaction yield as output, we showed that a random forest algorithm provides significantly improved predictive performance over linear regression analysis. The random forest model was also successfully applied to sparse training sets and out-of-sample prediction, suggesting its value in facilitating adoption of synthetic methodology.",
"title": ""
},
{
"docid": "582fc5f68422cf5ac35c526a905d6f42",
"text": "In this paper I present a review of the different forms of network security in place in the world today. It is a tutorial type of paper and it especially deals with cryptographic algorithms, security protocols, authentication issues, end to end security solutions with a host of other network security issues. I compile these into a general purview of this topic and then I go in detail regarding the issues involved. I first focus on the state of Network security in the world today after explaining the need for network security. After highlighting these, I will be looking into the different types of Network security used. This part is quite an extensive coverage into the various forms of Network security. Then, I will be highlighting the problems still facing computer networks followed by the latest research done in the areas of Computer and Network Security.",
"title": ""
},
{
"docid": "205880d3205cb0f4844c20dcf51c4890",
"text": "Recently, deep networks were proved to be more effective than shallow architectures to face complex real–world applications. However, theoretical results supporting this claim are still few and incomplete. In this paper, we propose a new topological measure to study how the depth of feedforward networks impacts on their ability of implementing high complexity functions. Upper and lower bounds on network complexity are established, based on the number of hidden units and on their activation functions, showing that deep architectures are able, with the same number of resources, to address more difficult classification problems.",
"title": ""
},
{
"docid": "81d82cd481ee3719c74d381205a4a8bb",
"text": "Consider a set of <italic>S</italic> of <italic>n</italic> data points in real <italic>d</italic>-dimensional space, R<supscrpt>d</supscrpt>, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess <italic>S</italic> into a data structure, so that given any query point <italic>q</italic><inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, is the closest point of S to <italic>q</italic> can be reported quickly. Given any positive real ε, data point <italic>p</italic> is a (1 +ε)-<italic>approximate nearest neighbor</italic> of <italic>q</italic> if its distance from <italic>q</italic> is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of <italic>n</italic> points in R<supscrpt>d</supscrpt> in <italic>O(dn</italic> log <italic>n</italic>) time and <italic>O(dn)</italic> space, so that given a query point <italic> q</italic> <inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, and ε > 0, a (1 + ε)-approximate nearest neighbor of <italic>q</italic> can be computed in <italic>O</italic>(<italic>c</italic><subscrpt><italic>d</italic>, ε</subscrpt> log <italic>n</italic>) time, where <italic>c<subscrpt>d,ε</subscrpt></italic>≤<italic>d</italic> <inline-equation> <f><fen lp=\"ceil\">1 + 6d/<g>e</g><rp post=\"ceil\"></fen></f></inline-equation>;<supscrpt>d</supscrpt> is a factor depending only on dimension and ε. In general, we show that given an integer <italic>k</italic> ≥ 1, (1 + ε)-approximations to the <italic>k</italic> nearest neighbors of <italic>q</italic> can be computed in additional <italic>O(kd</italic> log <italic>n</italic>) time.",
"title": ""
},
{
"docid": "044a73d9db2f61dc9b4f9de0bdaa1b3f",
"text": "Traditionally employed human-to-human and human-to-machine communication has recently been replaced by a new trend known as the Internet of things (IoT). IoT enables device-to-device communication without any human intervention, hence, offers many challenges. In this paradigm, machine’s self-sustainability due to limited energy capabilities presents a great challenge. Therefore, this paper proposed a low-cost energy harvesting device using rectenna to mitigate the problem in the areas where battery constraint issues arise. So, an energy harvester is designed, optimized, fabricated, and characterized for energy harvesting and IoT applications which simply recycles radio-frequency (RF) energy at 2.4 GHz, from nearby Wi-Fi/WLAN devices and converts them to useful dc power. The physical model comprises of antenna, filters, rectifier, and so on. A rectangular patch antenna is designed and optimized to resonate at 2.4 GHz using the well-known transmission-line model while the band-pass and low-pass filters are designed using lumped components. Schottky diode (HSMS-2820) is used for rectification. The circuit is designed and fabricated using the low-cost FR4 substrate (<inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula> = 16 mm and <inline-formula> <tex-math notation=\"LaTeX\">$\\varepsilon _{r} = 4.6$ </tex-math></inline-formula>) having the fabricated dimensions of 285 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times \\,\\,90$ </tex-math></inline-formula> mm. Universal software radio peripheral and GNU Radio are employed to measure the received RF power, while similar measurements are carried out using R&S spectrum analyzer for validation. The received measured power is −64.4 dBm at the output port of the rectenna circuit. Hence, our design enables a pervasive deployment of self-operable next-generation IoT devices.",
"title": ""
},
{
"docid": "f91479717316e55152b98eec80472b12",
"text": "Language in social media is a dynamic system, constantly evolving and adapting, with words and concepts rapidly emerging, disappearing, and changing their meaning. These changes can be estimated using word representations in context, over time and across locations. A number of methods have been proposed to track these spatiotemporal changes but no general method exists to evaluate the quality of these representations. Previous work largely focused on qualitative evaluation, which we improve by proposing a set of visualizations that highlight changes in text representation over both space and time. We demonstrate usefulness of novel spatiotemporal representations to explore and characterize specific aspects of the corpus of tweets collected from European countries over a two-week period centered around the terrorist attacks in Brussels in March 2016. In addition, we quantitatively evaluate spatiotemporal representations by feeding them into a downstream classification task – event type prediction. Thus, our work is the first to provide both intrinsic (qualitative) and extrinsic (quantitative) evaluation of text representations for spatiotemporal trends.",
"title": ""
},
{
"docid": "99463a031385cbc677e441b8aee87998",
"text": "Having a parent with a mental illness can create considerable risks in the mental health and wellbeing of children. While intervention programs have been used effectively to reduce children’s psychopathology, particularly those whose parents have a specific diagnosis, little is known about the effectiveness of these early interventions for the wellbeing of children of parents who have a mental illness from a broad range of parents. Here we report on an evaluation of CHAMPS (Children And Mentally ill ParentS), a pilot intervention program offered in two formats (school holiday and after school peer support programs) to children aged 8-12 whose parents have a mental illness. The wellbeing of 69 children was evaluated at the beginning of the programs and four weeks after program completion, on instruments examining self-esteem, coping skills, connections (total, within and outside the family) and relationship problems (total, within and outside the family). Post intervention, there were significant improvements in self-esteem, coping and connections within the family, and reductions in relationship problems. The impact on children’s wellbeing differed according to the intensity of the program (consecutive days or weekly program). The results are discussed in the context of providing interventions for children whose parents have a mental illness and the implications for service provision generally.",
"title": ""
},
{
"docid": "3cf9d0c8f74248f2b150682f3b5127eb",
"text": "Signal Temporal Logic (STL) is a formalism used to rigorously specify requirements of cyberphysical systems (CPS), i.e., systems mixing digital or discrete components in interaction with a continuous environment or analog components. STL is naturally equipped with a quantitative semantics which can be used for various purposes: from assessing the robustness of a specification to guiding searches over the input and parameter space with the goal of falsifying the given property over system behaviors. Algorithms have been proposed and implemented for offline computation of such quantitative semantics, but only few methods exist for an online setting, where one would want to monitor the satisfaction of a formula during simulation. In this paper, we formalize a semantics for robust online monitoring of partial traces, i.e., traces for which there might not be enough data to decide the Boolean satisfaction (and to compute its quantitative counterpart). We propose an efficient algorithm to compute it and demonstrate its usage on two large scale real-world case studies coming from the automotive domain and from CPS education in a Massively Open Online Course (MOOC) setting. We show that savings in computationally expensive simulations far outweigh any overheads incurred by an online approach.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "cabf420400bc46a00ee062c5d6a850a7",
"text": "In the last years, automotive systems evolved to be more and more software-intensive systems. As a result, consider able attention has been paid to establish an efficient softwa re development process of such systems, where reliability is an important criterion. Hence, model-driven development (MDD), software engineering and requirements engineering (amongst others) found their way into the systems engineering domain. However, one important aspect regarding the reliability of such systems, has been largely neglected on a holistic level: the IT security. In this paper, we introduce a potential approach for integrating IT security in the requirements engineering process of automotive software development using function net modeling.",
"title": ""
},
{
"docid": "e5e817d6cadc18d280d912fea42cdd9a",
"text": "Recent discoveries of geographical patterns in microbial distribution are undermining microbiology's exclusively ecological explanations of biogeography and their fundamental assumption that 'everything is everywhere: but the environment selects'. This statement was generally promulgated by Dutch microbiologist Martinus Wilhelm Beijerinck early in the twentieth century and specifically articulated in 1934 by his compatriot, Lourens G. M. Baas Becking. The persistence of this precept throughout twentieth-century microbiology raises a number of issues in relation to its formulation and widespread acceptance. This paper will trace the conceptual history of Beijerinck's claim that 'everything is everywhere' in relation to a more general account of its theoretical, experimental and institutional context. His principle also needs to be situated in relationship to plant and animal biogeography, which, this paper will argue, forms a continuum of thought with microbial biogeography. Finally, a brief overview of the contemporary microbiological research challenging 'everything is everywhere' reveals that philosophical issues from Beijerinck's era of microbiology still provoke intense discussion in twenty-first century investigations of microbial biogeography.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
scidocsrr
|
75962b8b433fdf4f7a6ce796e46bd558
|
Association of breakfast intake with obesity, dietary and physical activity behavior among urban school-aged adolescents in Delhi, India: results of a cross-sectional study
|
[
{
"docid": "b26882cddec1690e3099757e835275d2",
"text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.",
"title": ""
}
] |
[
{
"docid": "8de0dd3319971a5991a1649b3ae8e1c2",
"text": "Increased intracranial pressure (ICP) is a pathologic state common to a variety of serious neurologic conditions, all of which are characterized by the addition of volume to the intracranial vault. Hence all ICP therapies are directed toward reducing intracranial volume. Elevated ICP can lead to brain damage or death by two principle mechanisms: (1) global hypoxic-ischemic injury, which results from reduction of cerebral perfusion pressure (CPP) and cerebral blood ̄ow, and (2) mechanical compression, displacement, and herniation of brain tissue, which results from mass effect associated with compartmentalized ICP gradients. In unmonitored patients with acute neurologic deterioration, head elevation (30 degrees), hyperventilation (pCO2 26±30 mmHg), and mannitol (1.0±1.5 g/kg) can lower ICP within minutes. Fluid-coupled ventricular catheters and intraparenchymal pressure transducers are the most accurate and reliable devices for measuring ICP in the intensive care unit (ICU) setting. In a monitored patient, treatment of critical ICP elevation (>20 mmHg) should proceed in the following steps: (1) consideration of repeat computed tomography (CT) scanning or consideration of de®nitive neurosurgical intervention, (2) intravenous sedation to attain a quiet, motionless state, (3) optimization of CPP to levels between 70 and 110 mmHg, (4) osmotherapy with mannitol or hypertonic saline, (5) hyperventilation (pCO2 26±30 mmHg), (6) high-dose pentobarbital therapy, and (7) systemic cooling to attain moderate hypothermia (32±33°C). Placement of an ICP monitor and use of a stepwise treatment algorithm are both essential for managing ICP effectively in the ICU setting. Increased intracranial pressure (ICP) can result from a number of insults to the brain, including traumatic brain injury (TBI), stroke, encephalitis, neoplasms, and abscesses (Table 1). The fundamental abnormality common to these diverse disease states is an increase in intracranial volume. Accordingly, all treatments for elevated ICP work by reducing intracranial volume. Prompt recognition and treatment of elevated ICP is essential because sustained elevated ICP can cause brain damage or be rapidly fatal.",
"title": ""
},
{
"docid": "601e8d9336f329304436512cfa010634",
"text": "This papers consists of two parts. The first is a critical review of prior art on adversarial learning, i) identifying some significant limitations of previous works, which have focused mainly on attack exploits and ii) proposing novel defenses against adversarial attacks. The second part is an experimental study considering the adversarial active learning scenario and an investigation of the efficacy of a mixed sample selection strategy for combating an adversary who attempts to disrupt the classifier learning.",
"title": ""
},
{
"docid": "112ecbb8547619577962298fbe65eae1",
"text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box, Category-Partition testing, we propose a methodology and a tool based on machine learning that has shown promising results on a case study involving students as testers. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7140f8152de03babecf774149722ff58",
"text": "We study techniques for monitoring and understanding real-world human activities, in particular of drivers, from distributed vision sensors. Real-time and early prediction of maneuvers is emphasized, specifically overtake and brake events. Study this particular domain is motivated by the fact that early knowledge of driver behavior, in concert with the dynamics of the vehicle and surrounding agents, can help to recognize dangerous situations. Furthermore, it can assist in developing effective warning and driver assistance systems. Multiple perspectives and modalities are captured and fused in order to achieve a comprehensive representation of the scene. Temporal activities are learned from a multi-camera head pose estimation module, hand and foot tracking, ego-vehicle parameters, lane and road geometry analysis, and surround vehicle trajectories. The system is evaluated on a challenging dataset of naturalistic driving in real-world settings. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "647f8e9ece2c7663e2b8767f0694fec5",
"text": "Modern retrieval systems are often driven by an underlying machine learning model. The goal of such systems is to identify and possibly rank the few most relevant items for a given query or context. Thus, such systems are typically evaluated using a ranking-based performance metric such as the area under the precision-recall curve, the Fβ score, precision at fixed recall, etc. Obviously, it is desirable to train such systems to optimize the metric of interest. In practice, due to the scalability limitations of existing approaches for optimizing such objectives, large-scale retrieval systems are instead trained to maximize classification accuracy, in the hope that performance as measured via the true objective will also be favorable. In this work we present a unified framework that, using straightforward building block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives. We demonstrate the advantage of our approach on several real-life retrieval problems that are significantly larger than those considered in the literature, while achieving substantial improvement in performance over the accuracyobjective baseline. Proceedings of the 20 International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA. JMLR: W&CP volume 54. Copyright 2017 by the author(s).",
"title": ""
},
{
"docid": "49b1eebc6e8f1cbcf5b5299e44ee17b9",
"text": "The performance of a solar photovoltaic array (SPVA) is dependent upon the temperature and irradiance level and it is necessary to study the characteristics of photovoltaic (PV) array. To utilize PV power or extract maximum power from PV array, the maximum power point tracking (MPPT) technique is essential to study and implement. In this paper, an equivalent electrical circuit of PV system has been modelled and its characteristics are studied. Response of the PV array with different irradiance level is also obtained. Incremental conductance algorithm of MPPT is modelled and developed to extract maximum power from PV array. The design of dc-dc boost converter has been carried out. It is observed that maximum power from PV array is achieved at input and output side of the dc-dc converter. The simulation work is done in MATLAB/SIMULINK environment.",
"title": ""
},
{
"docid": "dfa611e19a3827c66ea863041a3ef1e2",
"text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.",
"title": ""
},
{
"docid": "86d725fa86098d90e5e252c6f0aaab3c",
"text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.",
"title": ""
},
{
"docid": "f1325dd1350acf612dc1817db693a3d6",
"text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.",
"title": ""
},
{
"docid": "3bfef5a2c10f466a774d5ca3c7eb98dc",
"text": "Article history: Available online 2 April 2013",
"title": ""
},
{
"docid": "b5c2d3295cd563983c81e048e59d6541",
"text": "In this paper, a real-time Human-Computer Interaction (HCI) based on the hand data glove and K-NN classifier for gesture recognition is proposed. HCI is moving more and more natural and intuitive way to be used. One of the important parts of our body is our hand which is most frequently used for the Interaction in Digital Environment and thus complexity and flexibility of motion of hands are the research topics. To recognize these hand gestures more accurately and successfully data glove is used. Here, gloves are used to capture current position of the hand and the angles between the joints and then these features are used to classify the gestures using K-NN classifier. The gestures classified are categorized as clicking, rotating, dragging, pointing and ideal position. Recognizing these gestures relevant actions are taken, such as air writing and 3D sketching by tracking the path helpful in virtual augmented reality (VAR). The results show that glove used for interaction is better than normal static keyboard and mouse as the interaction process is more accurate and natural in dynamic environment with no distance limitations. Also it enhances the user’s interaction and immersion feeling.",
"title": ""
},
{
"docid": "e23cebac640a47643b3a3249eae62f89",
"text": "Objective: To assess the factors that contribute to impaired quinine clearance in acute falciparum malaria. Patients: Sixteen adult Thai patients with severe or moderately severe falciparum malaria were studied, and 12 were re-studied during convalescence. Methods: The clearance of quinine, dihydroquinine (an impurity comprising up to 10% of commercial quinine formulations), antipyrine (a measure of hepatic mixed-function oxidase activity), indocyanine green (ICG) (a measure of liver blood flow), and iothalamate (a measure of glomerular filtration rate) were measured simultaneously, and the relationship of these values to the␣biotransformation of quinine to the active metabolite 3-hydroxyquinine was assessed. Results: During acute malaria infection, the systemic clearance of quinine, antipyrine and ICG and the biotransformation of quinine to 3-hydroxyquinine were all reduced significantly when compared with values during convalescence. Iothalamate clearance was not affected significantly and did not correlate with the clearance of any of the other compounds. The clearance of total and free quinine correlated significantly with antipyrine clearance (r s = 0.70, P = 0.005 and r s = 0.67, P = 0.013, respectively), but not with ICG clearance (r s = 0.39 and 0.43 respectively, P > 0.15). In a multiple regression model, antipyrine clearance and plasma protein binding accounted for 71% of the variance in total quinine clearance in acute malaria. The pharmacokinetic properties of dihydroquinine were generally similar to those of quinine, although dihydroquinine clearance was less affected by acute malaria. The mean ratio of quinine to 3-hydroxyquinine area under the plasma concentration-time curve (AUC) values in acute malaria was 12.03 compared with 6.92 during convalescence P=0.01. The mean plasma protein binding of 3-hydroxyquinine was 46%, which was significantly lower than that of quinine (90.5%) or dihydroquinine (90.5%). Conclusion: The reduction in quinine clearance in acute malaria results predominantly from a disease-induced dysfunction in hepatic mixed-function oxidase activity (principally CYP 3A) which impairs the conversion of quinine to its major metabolite, 3-hydroxyquinine. The metabolite contributes approximately 5% of the antimalarial activity of the parent compound in malaria, but up to 10% during convalescence.",
"title": ""
},
{
"docid": "e2b153aba78b2831a7f1ecc1b26e0fc9",
"text": "Recent gene expression profiling of breast cancer has identified specific subtypes with clinical, biologic, and therapeutic implications. The basal-like group of tumors is characterized by an expression signature similar to that of the basal/myoepithelial cells of the breast and is reported to have transcriptomic characteristics similar to those of tumors arising in BRCA1 germline mutation carriers. They are associated with aggressive behavior and poor prognosis, and typically do not express hormone receptors or HER-2 (\"triple-negative\" phenotype). Therefore, patients with basal-like cancers are unlikely to benefit from currently available targeted systemic therapy. Although basal-like tumors are characterized by distinctive morphologic, genetic, immunophenotypic, and clinical features, neither an accepted consensus on routine clinical identification and definition of this aggressive subtype of breast cancer nor a way of systematically classifying this complex group of tumors has been described. Different definitions are, therefore, likely to produce variable and contradictory results that may hamper consistent identification and development of treatment strategies for these tumors. In this review, we discuss definition, heterogeneity, morphologic spectrum, relation to BRCA1, and clinical significance of this important class of breast cancer.",
"title": ""
},
{
"docid": "2e7bc1cc2f4be94ad0e4bce072a9f98a",
"text": "Glycosylation plays an important role in ensuring the proper structure and function of most biotherapeutic proteins. Even small changes in glycan composition, structure, or location can have a drastic impact on drug safety and efficacy. Recently, glycosylation has become the subject of increased focus as biopharmaceutical companies rush to create not only biosimilars, but also biobetters based on existing biotherapeutic proteins. Against this backdrop of ongoing biopharmaceutical innovation, updated methods for accurate and detailed analysis of protein glycosylation are critical for biopharmaceutical companies and government regulatory agencies alike. This review summarizes current methods of characterizing biopharmaceutical glycosylation, including compositional mass profiling, isomer-specific profiling and structural elucidation by MS and hyphenated techniques.",
"title": ""
},
{
"docid": "28d573b9b32a8f95618a01f1e5e43a01",
"text": "When trying to satisfy an information need, smartphone users frequently transition from mobile search engines to mobile apps and vice versa. However, little is known about the nature of these transitions nor how mobile search and mobile apps interact. We report on a 2-week, mixed-method study involving 18 Android users, where we collected real-world mobile search and mobile app usage data alongside subjective insights on why certain interactions between apps and mobile search occur. Our results show that when people engage with mobile search they tend to interact with more mobile apps and for longer durations. We found that certain categories of apps are used more intensely alongside mobile search. Furthermore we found differences in app usage before and after mobile search and show how mobile app interactions can both prompt mobile search and enable users to take action. We conclude with a discussion on what these patterns mean for mobile search and how we might design mobile search experiences that take these app interactions into account.",
"title": ""
},
{
"docid": "5df96510354ee3b37034a99faeff4956",
"text": "In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms stateof-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-theart methods is around 12.2% in F1score.",
"title": ""
},
{
"docid": "938f8383d25d30b39b6cd9c78d1b3ab5",
"text": "In the last two decades, the Lattice Boltzmann method (LBM) has emerged as a promising tool for modelling the Navier-Stokes equations and simulating complex fluid flows. LBM is based on microscopic models and mesoscopic kinetic equations. In some perspective, it can be viewed as a finite difference method for solving the Boltzmann transport equation. Moreover the Navier-Stokes equations can be recovered by LBM with a proper choice of the collision operator. In Section 2 and 3, we first introduce this method and describe some commonly used boundary conditions. In Section 4, the validity of this method is confirmed by comparing the numerical solution to the exact solution of the steady plane Poiseuille flow and convergence of solution is established. Some interesting numerical simulations, including the lid-driven cavity flow, flow past a circular cylinder and the Rayleigh-Bénard convection for a range of Reynolds numbers, are carried out in Section 5, 6 and 7. In Section 8, we briefly highlight the procedure of recovering the Navier-Stokes equations from LBM. A summary is provided in Section 9.",
"title": ""
},
{
"docid": "a57aa7ff68f7259a9d9d4d969e603dcd",
"text": "Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And seeing as it is the people who form the societies, a constantly changing society is only natural. However something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments that indicate that the changes seem to be following each other faster, especially over the last few decades. We can for instance, point to the pluralisation (or a growing versatility), differentialisation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.",
"title": ""
},
{
"docid": "9b49a4673456ab8e9f14a0fe5fb8bcc7",
"text": "Legged robots offer the potential to navigate a wide variety of terrains that are inaccessible to wheeled vehicles. In this paper we consider the planning and control tasks of navigating a quadruped robot over a wide variety of challenging terrain, including terrain which it has not seen until run-time. We present a software architecture that makes use of both static and dynamic gaits, as well as specialized dynamic maneuvers, to accomplish this task. Throughout the paper we highlight two themes that have been central to our approach: 1) the prevalent use of learning algorithms, and 2) a focus on rapid recovery and replanning techniques; we present several novel methods and algorithms that we developed for the quadruped and that illustrate these two themes. We evaluate the performance of these different methods, and also present and discuss the performance of our system on the official Learning Locomotion tests.",
"title": ""
},
{
"docid": "b8aab94410391b0e2544f2d8b4a4891e",
"text": "In this paper, we present \"k-means+ID3\", a method to cascade k-means clustering and the ID3 decision tree learning methods for classifying anomalous and normal activities in a computer network, an active electronic circuit, and a mechanical mass-beam system. The k-means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, representing a density region of normal or anomaly instances, we build an ID3 decision tree. The decision tree on each cluster refines the decision boundaries by learning the subgroups within the cluster. To obtain a final decision on classification, the decisions of the k-means and ID3 methods are combined using two rules: 1) the nearest-neighbor rule and 2) the nearest-consensus rule. We perform experiments on three data sets: 1) network anomaly data (NAD), 2) Duffing equation data (DED), and 3) mechanical system data (MSD), which contain measurements from three distinct application domains of computer networks, an electronic circuit implementing a forced Duffing equation, and a mechanical system, respectively. Results show that the detection accuracy of the k-means+ID3 method is as high as 96.24 percent at a false-positive-rate of 0.03 percent on NAD; the total accuracy is as high as 80.01 percent on MSD and 79.9 percent on DED",
"title": ""
}
] |
scidocsrr
|
bcd14bd95cffc2c07861a8ae4136119f
|
Optimization of Robotic Arm Trajectory Using Genetic Algorithm
|
[
{
"docid": "3c82ba94aa4d717d51c99cfceb527f22",
"text": "Manipulator collision avoidance using genetic algorithms is presented. Control gains in the collision avoidance control model are selected based on genetic algorithms. A repulsive force is artificially created using the distances between the robot links and obstacles, which are generated by a distance computation algorithm. Real-time manipulator collision avoidance control has achieved. A repulsive force gain is introduced through the approaches for definition of link coordinate frames and kinematics computations. The safety distance between objects is affected by the repulsive force gain. This makes the safety zone adjustable and provides greater intelligence for robotic tasks under the ever-changing environment.",
"title": ""
}
] |
[
{
"docid": "9bec22bcbf1ab3071d65dd8b41d3cf51",
"text": "Omni-directional mobile platforms have the ability to move instantaneously in any direction from any configuration. As such, it is important to have a mathematical model of the platform, especially if the platform is to be used as an autonomous vehicle. Autonomous behaviour requires that the mobile robot choose the optimum vehicle motion in different situations for object/collision avoidance and task achievement. This paper develops and verifies a mathematical model of a mobile robot platform that implements mecanum wheels to achieve omni-directionality. The mathematical model will be used to achieve optimum autonomous control of the developed mobile robot as an office service robot. Omni-directional mobile platforms have improved performance in congested environments and narrow aisles, such as those found in factory workshops, offices, warehouses, hospitals, etc.",
"title": ""
},
{
"docid": "a9f309abd4711ad5c73c8ba8e80a1c76",
"text": "The Open Networking Foundation's Extensibility Working Group is standardizing OpenFlow, the main software-defined networking (SDN) protocol. To address the requirements of a wide range of network devices and to accommodate its all-volunteer membership, the group has made the specification process highly dynamic and similar to that of open source projects.",
"title": ""
},
{
"docid": "dcd6effc28744aa875a37ad28ecc68e1",
"text": "The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.",
"title": ""
},
{
"docid": "beb8cb3566af719308c9ec249c955ff0",
"text": " Abstract—This article presents the review of the computing models applied for solving problems of midterm load forecasting. The load forecasting results can be used in electricity generation such as energy reservation and maintenance scheduling. Principle, strategy and results of short term, midterm, and long term load forecasting using statistic methods and artificial intelligence technology (AI) are summaried, Which, comparison between each method and the articles have difference feature input and strategy. The last, will get the idea or literature review conclusion to solve the problem of mid term load forecasting (MTLF).",
"title": ""
},
{
"docid": "459dc066960760010b1157e4929d09f8",
"text": "A dynamical extension that makes possible the integration of a kinematic controller and a torque controller for nonholonomic mobile robots is presented. A combined kinematic/torque control law is developed using backstepping, and asymptotic stability is guaranteed by Lyapunov theory. Moreover, this control algorithm can be applied to the three basic nonholonomic navigation problems: tracking a reference trajectory, path following, and stabilization about a desired posture. The result is a general structure for controlling a mobile robot that can accommodate different control techniques, ranging from a conventional computed-torque controller, when all dynamics are known, to robust-adaptive controllers if this is not the case. A robust-adaptive controller based on neural networks (NNs) is proposed in this work. The NN controller can deal with unmodeled bounded disturbances and/or unstructured unmodeled dynamics in the vehicle. On-line NN weight tuning algorithms that do not require off-line learning yet guarantee small tracking errors and bounded control signals are utilized. 1997 John Wiley & Sons, Inc.",
"title": ""
},
{
"docid": "a20b887faf0df752ed4d74861634405b",
"text": "A high frequency, high efficiency bi-directional battery charger for Plug-in Hybrid Electric Vehicle (PHEV) is built with high voltage normally-off GaN-on-Si HFETs. This paper characterized the multi-chip-model both statically and dynamically. The optimal design of the isolated 500 kHz Dual Active Bridge DC/DC stage is detailed, taking account the wide battery voltage range and sinusoidal charging, to eliminate large DC link capacitor. Experimentally result shows a 500 kHz DAB converter with discrete inductor and transformer can achieved 97.2% efficiency at 1kW and 96.4% efficiency at 2.4 kW. By integrating the inductor into the transformer, 98.2% efficiency is achieved at 1 kW.",
"title": ""
},
{
"docid": "f8f36ef5822446478b154c9d98847070",
"text": "The objective of this research is to improve traffic safety through collecting and distributing up-to-date road surface condition information using mobile phones. Road surface condition information is seen useful for both travellers and for the road network maintenance. The problem we consider is to detect road surface anomalies that, when left unreported, can cause wear of vehicles, lesser driving comfort and vehicle controllability, or an accident. In this work we developed a pattern recognition system for detecting road condition from accelerometer and GPS readings. We present experimental results from real urban driving data that demonstrate the usefulness of the system. Our contributions are: 1) Performing a throughout spectral analysis of tri-axis acceleration signals in order to get reliable road surface anomaly labels. 2) Comprehensive preprocessing of GPS and acceleration signals. 3) Proposing a speed dependence removal approach for feature extraction and demonstrating its positive effect in multiple feature sets for the road surface anomaly detection task. 4) A framework for visually analyzing the classifier predictions over the validation data and labels.",
"title": ""
},
{
"docid": "24855976195933799d110122cbbbe6d5",
"text": "Association of audio events with video events presents a challenge to a typical camera-microphone approach in order to capture AV signals from a large distance. Setting up a long range microphone array and performing geo-calibration of both audio and video sensors is difficult. In this work, in addition to a geo-calibrated electro-optical camera, we propose to use a novel optical sensor a Laser Doppler Vibrometer (LDV) for real-time audio sensing, which allows us to capture acoustic signals from a large distance, and to use the same geo-calibration for both the camera and the audio (via LDV). We have promising preliminary results on association of the audio recording of speech with the video of the human speaker.",
"title": ""
},
{
"docid": "0e92556cf50fa576b9c16a7f55eb1d18",
"text": "We report very wideband 100 MHz to 1 GHz active circulator with high power operation up to 30 dBm for the first time. In order to achieve broadband high power circulation and isolation, a new architecture is developed using low-loss RF choke concept and it is implemented on alumina substrate with mounted GaN HEMT devices along with other SMTs. The performed test shows minimum 15 to 20 dB isolation up to 30 dBm and 15 dB directivity up to 26 dBm across the band with 2.5~5 dB insertion loss for 15 dB of minimum directivity.",
"title": ""
},
{
"docid": "e77c7b9c486f895167c54b6724e9e3c8",
"text": "Many machine learning tasks can be expressed as the transformation—or transduction—of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation since finding the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that returns a distribution over output sequences of all possible lengths and alignments for any input sequence. Experimental results are provided on the TIMIT speech corpus.",
"title": ""
},
{
"docid": "17dc2b08b63a10c70aa1fcfcf72071df",
"text": "In this paper, we introduce Adversarial-and-attention Network (A3Net) for Machine Reading Comprehension. This model extends existing approaches from two perspectives. First, adversarial training is applied to several target variables within the model, rather than only to the inputs or embeddings. We control the norm of adversarial perturbations according to the norm of original target variables, so that we can jointly add perturbations to several target variables during training. As an effective regularization method, adversarial training improves robustness and generalization of our model. Second, we propose a multi-layer attention network utilizing three kinds of high-efficiency attention mechanisms. Multi-layer attention conducts interaction between question and passage within each layer, which contributes to reasonable representation and understanding of the model. Combining these two contributions, we enhance the diversity of dataset and the information extracting ability of the model at the same time. Meanwhile, we construct A3Net for the WebQA dataset. Results show that our model outperforms the state-ofthe-art models (improving Fuzzy Score from 73.50% to 77.0%).",
"title": ""
},
{
"docid": "1f62f4d5b84de96583e17fdc0f4828be",
"text": "This study examined age differences in perceptions of online communities held by people who were not yet participating in these relatively new social spaces. Using the Technology Acceptance Model (TAM), we investigated the factors that affect future intention to participate in online communities. Our results supported the proposition that perceived usefulness positively affects behavioral intention, yet it was determined that perceived ease of use was not a significant predictor of perceived usefulness. The study also discovered negative relationships between age and Internet self-efficacy and the perceived quality of online community websites. However, the moderating role of age was not found. The findings suggest that the relationships among perceived ease of use, perceived usefulness, and intention to participate in online communities do not change with age. Theoretical and practical implications and limitations were discussed. ! 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4408d5fa31a64d54fbe4b4d70b18182b",
"text": "Using microarray analysis, this study showed up-regulation of toll-like receptors 1, 2, 4, 7, 8, NF-κB, TNF, p38-MAPK, and MHC molecules in human peripheral blood mononuclear cells following infection with Plasmodium falciparum. This analysis reports herein further studies based on time-course microarray analysis with focus on malaria-induced host immune response. The results show that in early malaria, selected immune response-related genes were up-regulated including α β and γ interferon-related genes, as well as genes of IL-15, CD36, chemokines (CXCL10, CCL2, S100A8/9, CXCL9, and CXCL11), TRAIL and IgG Fc receptors. During acute febrile malaria, up-regulated genes included α β and γ interferon-related genes, IL-8, IL-1b IL-10 downstream genes, TGFB1, oncostatin-M, chemokines, IgG Fc receptors, ADCC signalling, complement-related genes, granzymes, NK cell killer/inhibitory receptors and Fas antigen. During recovery, genes for NK receptorsand granzymes/perforin were up-regulated. When viewed in terms of immune response type, malaria infection appeared to induce a mixed TH1 response, in which α and β interferon-driven responses appear to predominate over the more classic IL-12 driven pathway. In addition, TH17 pathway also appears to play a significant role in the immune response to P. falciparum. Gene markers of TH17 (neutrophil-related genes, TGFB1 and IL-6 family (oncostatin-M)) and THαβ (IFN-γ and NK cytotoxicity and ADCC gene) immune response were up-regulated. Initiation of THαβ immune response was associated with an IFN-αβ response, which ultimately resulted in moderate-mild IFN-γ achieved via a pathway different from the more classic IL-12 TH1 pattern. Based on these observations, this study speculates that in P. falciparum infection, THαβ/TH17 immune response may predominate over ideal TH1 response.",
"title": ""
},
{
"docid": "5acabeb9ebb369900f2f1cc585fbec6e",
"text": "Infection with human papillomavirus (HPV) is recognized as one of the major causes of infection-related cancer worldwide, as well as the causal factor in other diseases. Strong evidence for a causal etiology with HPV has been stated by the International Agency for Research on Cancer for cancers of the cervix uteri, penis, vulva, vagina, anus and oropharynx (including base of the tongue and tonsils). Of the estimated 12.7 million new cancers occurring in 2008 worldwide, 4.8% were attributable to HPV infection, with substantially higher incidence and mortality rates seen in developing versus developed countries. In recent years, we have gained tremendous knowledge about HPVs and their interactions with host cells, tissues and the immune system; have validated and implemented strategies for safe and efficacious prophylactic vaccination against HPV infections; have developed increasingly sensitive and specific molecular diagnostic tools for HPV detection for use in cervical cancer screening; and have substantially increased global awareness of HPV and its many associated diseases in women, men, and children. While these achievements exemplify the success of biomedical research in generating important public health interventions, they also generate new and daunting challenges: costs of HPV prevention and medical care, the implementation of what is technically possible, socio-political resistance to prevention opportunities, and the very wide ranges of national economic capabilities and health care systems. Gains and challenges faced in the quest for comprehensive control of HPV infection and HPV-related cancers and other disease are summarized in this review. The information presented may be viewed in terms of a reframed paradigm of prevention of cervical cancer and other HPV-related diseases that will include strategic combinations of at least four major components: 1) routine introduction of HPV vaccines to women in all countries, 2) extension and simplification of existing screening programs using HPV-based technology, 3) extension of adapted screening programs to developing populations, and 4) consideration of the broader spectrum of cancers and other diseases preventable by HPV vaccination in women, as well as in men. Despite the huge advances already achieved, there must be ongoing efforts including international advocacy to achieve widespread-optimally universal-implementation of HPV prevention strategies in both developed and developing countries. This article summarizes information from the chapters presented in a special ICO Monograph 'Comprehensive Control of HPV Infections and Related Diseases' Vaccine Volume 30, Supplement 5, 2012. Additional details on each subtopic and full information regarding the supporting literature references may be found in the original chapters.",
"title": ""
},
{
"docid": "d67e0fa20185e248a18277e381c9d42d",
"text": "Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.",
"title": ""
},
{
"docid": "cbf856284155b7ad6a48ca2fdc758df2",
"text": "We present an image caption system that addresses new challenges of automatically describing images in the wild. The challenges include generating high quality caption with respect to human judgments, out-of-domain data handling, and low latency required in many applications. Built on top of a state-of-the-art framework, we developed a deep vision model that detects a broad range of visual concepts, an entity recognition model that identifies celebrities and landmarks, and a confidence model for the caption output. Experimental results show that our caption engine outperforms previous state-of-the-art systems significantly on both in-domain dataset (i.e. MS COCO) and out-of-domain datasets. We also make the system publicly accessible as a part of the Microsoft Cognitive Services.",
"title": ""
},
{
"docid": "2ce7c776cd231117fecdf81f2e8d35a2",
"text": "The use of social media as a source of news is entering a new phase as computer algorithms are developed and deployed to detect, rank, and verify news. The efficacy and ethics of such technology are the subject of this article, which examines the SocialSensor application, a tool developed by a multidisciplinary EU research project. The results suggest that computer software can be used successfully to identify trending news stories, allow journalists to search within a social media corpus, and help verify social media contributors and content. However, such software also raises questions about accountability as social media is algorithmically filtered for use by journalists and others. Our analysis of the inputs SocialSensor relies on shows biases towards those who are vocal and have an audience, many of whom are men in the media. We also reveal some of the technology's temporal and topic preferences. The conclusion discusses whether such biases are necessary for systems like SocialSensor to be effective. The article also suggests that academic research has failed to fully recognise the changes to journalists' sourcing practices brought about by social media, particularly Twitter, and provides some countervailing evidence and an explanation for this failure. Introduction The ubiquity of computing in contemporary culture has resulted in human decision-making being augmented, and even partially replaced, by computational processes or algorithms using artificial intelligence and information-retrieval techniques. Such augmentation and substitution is already common, and even predominates, in some industries, such as financial trading and legal research. Frey and Osborne (2013) have attempted to predict the extent to which a wide spectrum of jobs is susceptible to computerisation. Although journalists were not included in their analysis, some of the activities undertaken by journalists—for example those carried out by interviewers, proofreaders, and copy markers—were, and had a greater than 50 per cent probability of being computerised. It is that potential for the automation of journalistic work that is explored in this article. Frey and Osborne remind us of how automation can be aggressively resisted by workers, giving the example of William Lee who, they say, was driven out of Britain by the guild of hosiers for inventing a machine that knitted stockings. Such resistance also exists in the context of journalistic automation. For example, the German Federation of Journalists have said they \" don't think it is … desirable that journalism is done with algorithms \" (Konstantin Dörr, personal communication, 6 February …",
"title": ""
},
{
"docid": "8750e04065d8f0b74b7fee63f4966e59",
"text": "The Customer churn is a crucial activity in rapidly growing and mature competitive telecommunication sector and is one of the greatest importance for a project manager. Due to the high cost of acquiring new customers, customer churn prediction has emerged as an indispensable part of telecom sectors’ strategic decision making and planning process. It is important to forecast customer churn behavior in order to retain those customers that will churn or possible may churn. This study is another attempt which makes use of rough set theory, a rule-based decision making technique, to extract rules for churn prediction. Experiments were performed to explore the performance of four different algorithms (Exhaustive, Genetic, Covering, and LEM2). It is observed that rough set classification based on genetic algorithm, rules generation yields most suitable performance out of the four rules generation algorithms. Moreover, by applying the proposed technique on publicly available dataset, the results show that the proposed technique can fully predict all those customers that will churn or possibly may churn and also provides useful information to strategic decision makers as well.",
"title": ""
},
{
"docid": "723bfb5acef53d78a05660e5d9710228",
"text": "Cheap micro-controllers, such as the Arduino or other controllers based on the Atmel AVR CPUs are being deployed in a wide variety of projects, ranging from sensors networks to robotic submarines. In this paper, we investigate the feasibility of using the Arduino as a true random number generator (TRNG). The Arduino Reference Manual recommends using it to seed a pseudo random number generator (PRNG) due to its ability to read random atmospheric noise from its analog pins. This is an enticing application since true bits of entropy are hard to come by. Unfortunately, we show by statistical methods that the atmospheric noise of an Arduino is largely predictable in a variety of settings, and is thus a weak source of entropy. We explore various methods to extract true randomness from the micro-controller and conclude that it should not be used to produce randomness from its analog pins.",
"title": ""
},
{
"docid": "77d2e60d3b7ff96f3266015403b75f34",
"text": "Modern databases, guaranteeing atomicity and durability, store transaction logs in a volatile, central log buffer and then flush the log buffer to non-volatile storage by the write-ahead logging principle. Buffering logs in central log store has recently faced a severe multicore scalability problem, and log flushing has been challenged by synchronous I/O delay. We have designed and implemented a fast and scalable logging method, ELEDA, that can migrate a surge of transaction logs from volatile memory to stable storage without risking durable transaction atomicity. Our efficient implementation of ELEDA is enabled by a highly concurrent data structure, GRASSHOPPER, that eliminates a multicore scalability problem of centralized logging and enhances system utilization in the presence of synchronous I/O delay. We implemented ELEDA and plugged it to WiredTiger and Shore-MT by replacing their log managers. Our evaluation showed that ELEDA-based transaction systems improve performance up to 71ˆ, thus showing the applicability of ELEDA. PVLDB Reference Format: Hyungsoo Jung, Hyuck Han, and Sooyong Kang. Scalable Database Logging for Multicores. PVLDB, 11(2): 135 148, 2017. DOI: https://doi.org/10.14778/3149193.3149195",
"title": ""
}
] |
scidocsrr
|
c0ce9b9e33b5fa46c98b6e19a65dae9b
|
Twitter: who gets caught? observed trends in social micro-blogging spam
|
[
{
"docid": "fae9db6e3522ec00793613abc3617dcc",
"text": "Size, accessibility, and rate of growth of Online Social Media (OSM) has attracted cyber crimes through them. One form of cyber crime that has been increasing steadily is phishing, where the goal (for the phishers) is to steal personal information from users which can be used for fraudulent purposes. Although the research community and industry has been developing techniques to identify phishing attacks through emails and instant messaging (IM), there is very little research done, that provides a deeper understanding of phishing in online social media. Due to constraints of limited text space in social systems like Twitter, phishers have begun to use URL shortener services. In this study, we provide an overview of phishing attacks for this new scenario. One of our main conclusions is that phishers are using URL shorteners not only for reducing space but also to hide their identity. We observe that social media websites like Facebook, Habbo, Orkut are competing with e-commerce services like PayPal, eBay in terms of traffic and focus of phishers. Orkut, Habbo, and Facebook are amongst the top 5 brands targeted by phishers. We study the referrals from Twitter to understand the evolving phishing strategy. A staggering 89% of references from Twitter (users) are inorganic accounts which are sparsely connected amongst themselves, but have large number of followers and followees. We observe that most of the phishing tweets spread by extensive use of attractive words and multiple hashtags. To the best of our knowledge, this is the first study to connect the phishing landscape using blacklisted phishing URLs from PhishTank, URL statistics from bit.ly and cues from Twitter to track the impact of phishing in online social media.",
"title": ""
}
] |
[
{
"docid": "db9e401e4c2bdee1187389c340541877",
"text": "We show in this paper how some algebraic methods can be used for fingerprint matching. The described technique is able to compute the score of a match also when the template and test fingerprints have been not correctly acquired. In particular, the match is independent of translations, rotations and scaling transformations of the template. The technique is also able to compute a match score when part of the fingerprint image is incorrect or missed. The algorithm is being implemented in CoCoA, a computer algebra system for doing computations in Commutative Algebra.",
"title": ""
},
{
"docid": "aeef3eff9578d8bb1efdf3db59f39c16",
"text": "• NOTICE: this is the author's version of a work that was accepted for publication in Industrial Marketing Management. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published at: http://dx.doi.org/10.1016/j.indmarman.2011.09.009",
"title": ""
},
{
"docid": "1f097bfc1d41f2f828353f7853a532d0",
"text": "It has recently been demonstrated that mouse and human fibroblasts can be reprogrammed into an embryonic stem cell-like state by introducing combinations of four transcription factors. However, the therapeutic potential of such induced pluripotent stem (iPS) cells remained undefined. By using a humanized sickle cell anemia mouse model, we show that mice can be rescued after transplantation with hematopoietic progenitors obtained in vitro from autologous iPS cells. This was achieved after correction of the human sickle hemoglobin allele by gene-specific targeting. Our results provide proof of principle for using transcription factor-induced reprogramming combined with gene and cell therapy for disease treatment in mice. The problems associated with using retroviruses and oncogenes for reprogramming need to be resolved before iPS cells can be considered for human therapy.",
"title": ""
},
{
"docid": "b2612334017b1b342f025dce23fda554",
"text": "In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolutionbased technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25milliseconds for 67% and 76.6% of the syllable segments, respectively.",
"title": ""
},
{
"docid": "de2ed315762d3f0ac34fe0b77567b3a2",
"text": "A study in vitro of specimens of human aortic and common carotid arteries was carried out to determine the feasibility of direct measurement (i.e., not from residual lumen) of arterial wall thickness with B mode real-time imaging. Measurements in vivo by the same technique were also obtained from common carotid arteries of 10 young normal male subjects. Aortic samples were classified as class A (relatively normal) or class B (with one or more atherosclerotic plaques). In all class A and 85% of class B arterial samples a characteristic B mode image composed of two parallel echogenic lines separated by a hypoechoic space was found. The distance between the two lines (B mode image of intimal + medial thickness) was measured and correlated with the thickness of different combinations of tunicae evaluated by gross and microscopic examination. On the basis of these findings and the results of dissection experiments on the intima and adventitia we concluded that results of B mode imaging of intimal + medial thickness did not differ significantly from the intimal + medial thickness measured on pathologic examination. With respect to the accuracy of measurements obtained by B mode imaging as compared with pathologic findings, we found an error of less than 20% for measurements in 77% of normal and pathologic aortic walls. In addition, no significant difference was found between B mode-determined intimal + medial thickness in the common carotid arteries evaluated in vitro and that determined by this method in vivo in young subjects, indicating that B mode imaging represents a useful approach for the measurement of intimal + medial thickness of human arteries in vivo.",
"title": ""
},
{
"docid": "6e82e635682cf87a84463f01c01a1d33",
"text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.",
"title": ""
},
{
"docid": "5ffb9d27145e5721417c0241765f16fc",
"text": "In this paper we present the methodology for word sense disambiguation based on domain information. Domain is a set of words in which there is a strong semantic relation among the words. The words in the sentence contribute to determine the domain of the sentence. The availability of WordNet domains makes the domain-oriented text analysis possible. The domain of the target word can be fixed based on the domains of the content words in the local context. This approach can be effectively used to disambiguate nouns. We present the unsupervised approach to word sense disambiguation using the WordNet domains. The model determines the domain of the target word and the sense corresponding to this domain is taken as the correct sense. We have used the WordNet domains 3.1 as lexical database.",
"title": ""
},
{
"docid": "4bdbfadd585481e86c2a3f48c60b2bbf",
"text": "As businesses in every area face intense competition, advanced analytics will help improving their profitability and gain a competitive advantage by enhancing customer experience. Traditionally the analysis process has been done in off-line mode, by using Data Warehouse Technologies combined with BI tools. That is not enough anymore. Today, big data is becoming a business imperative. The benefits of big data have been deeply analyzed in many articles and reports during the past years. And it is evident that such increased value is something tangible in every area. Just to mention a few, this includes the energy sector, the financial services, the telecommunication, the transport, healthcare and education, etc. In this article, we intend to understand better what Big Data Technology is all about, the benefits that they bring to the society, focusing in particular to the telecom industry. The experimental environment is set up. Installation and configuration of the platform for data management and analytics will allow us to access all the sample data possessed by a telecom operator in a single platform. Big Data technology will be used to a telecom operator to find out pattern and reasons of call drop in real time and send customers’ apology text message and also refund money for dropped calls resulting in improved customer satisfaction and brand value.",
"title": ""
},
{
"docid": "db434a6815fe963beedbec2078979543",
"text": "Effective regulation of affect: An action control perspective on emotion regulation Thomas L. Webb a , Inge Schweiger Gallo b , Eleanor Miles a , Peter M. Gollwitzer c d & Paschal Sheeran a a Department of Psychology, University of Sheffield, Sheffield, UK b Departamento de Psicología Social, Universidad Complutense de Madrid, Madrid, Spain c Department of Psychology, New York University, New York, USA d Department of Psychology, University of Konstanz, Konstanz, Germany",
"title": ""
},
{
"docid": "a64c730c68e431a0cfa3f8b295f4aa59",
"text": "The use of robots in underwater exploration is increasing in the last years. The automation of the monitoring, inspection and underwater maintenance tasks require a good mapping and localization system. One of the key issues of these systems is how to summarize the sensory information in order to recognize an area that has already been visited. This paper proposes a description method of acoustic images acquired by a forward looking sonar (FLS) using a graph of Gaussian probability density function. This structure represents both shape and topological relation. Furthermore, we also presented a method to match the descriptors in a efficient way. We evaluated the method in a real dataset acquired by a underwater vehicle performing autonomous navigation and mapping tasks in a marine environment.",
"title": ""
},
{
"docid": "cf374e1d1fa165edaf0b29749f32789c",
"text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.",
"title": ""
},
{
"docid": "e2fb4ed617cffabba2f28b95b80a30b3",
"text": "The importance of information security education, information security training, and information security awareness in organisations cannot be overemphasised. This paper presents working definitions for information security education, information security training and information security awareness. An investigation to determine if any differences exist between information security education, information security training and information security awareness was conducted. This was done to help institutions understand when they need to train or educate employees and when to introduce information security awareness programmes. A conceptual analysis based on the existing literature was used for proposing working definitions, which can be used as a reference point for future information security researchers. Three important attributes (namely focus, purpose and method) were identified as the distinguishing characteristics of information security education, information security training and information security awareness. It was found that these information security concepts are different in terms of their focus, purpose and methods of delivery.",
"title": ""
},
{
"docid": "3bda2eb4b203c904bb838175b382c2c7",
"text": "In this digital era, ICT use in the classroom is important for giving students opportunities to learn and apply the required 21st century skills. Hence studying the issues and challenges related to ICT use in teaching and learning can assist teachers in overcoming the obstacles and become successful technology users. Therefore, the main purpose of this study is to analyze teachers’ perceptions of the challenges faced in using ICT tools in classrooms. A quantitative research design was used to collect the data randomly from a sample of 100 secondary school teachers in the state of Melaka, Malaysia. Evidence has been collected through distribution of a modified‐ adopted survey questionnaire. Overall, the key issues and challenges found to be significant in using ICT tools by teachers were: limited accessibility and network connection, limited technical support, lack of effective training, limited time and lack of teachers’ competency. Moreover, the results from independent t‐ test show that use of ICT tools by male teachers (M =2.08, SD = .997) in the classroom is higher compared to female teachers (M = 2.04, SD = .992). It is hoped that the outcome of this research provides proper information and recommendation to those responsible for integrating new technologies into the school teaching and learning process.",
"title": ""
},
{
"docid": "84440568cdae970ba532df5501ff7781",
"text": "Present work deals with the biotechnological production of fuel ethanol from different raw materials. The different technologies for producing fuel ethanol from sucrose-containing feedstocks (mainly sugar cane), starchy materials and lignocellulosic biomass are described along with the major research trends for improving them. The complexity of the biomass processing is recognized through the analysis of the different stages involved in the conversion of lignocellulosic complex into fermentable sugars. The features of fermentation processes for the three groups of studied feedstocks are discussed. Comparative indexes for the three major types of feedstocks for fuel ethanol production are presented. Finally, some concluding considerations on current research and future tendencies in the production of fuel ethanol regarding the pretreatment and biological conversion of the feedstocks are presented.",
"title": ""
},
{
"docid": "2c3768ce9b2801e8a2ef50ccdfbfa3d3",
"text": "Kelly's Criterion is well known among gamblers and investors as a method for maximizing the returns one would expect to observe over long periods of betting or investing. These ideas are conspicuously absent from portfolio optimization problems in the financial and automation literature. This paper will show how Kelly's Criterion can be incorporated into standard portfolio optimization models. The model developed here combines risk and return into a single objective function by incorporating a risk parameter. This model is then solved for a portfolio of 10 stocks from a major stock exchange using a differential evolution algorithm. Monte Carlo calculations are used to verify the accuracy of the results obtained from differential evolution. The results show that evolutionary algorithms can be successfully applied to solve a portfolio optimization problem where returns are calculated by applying Kelly's Criterion to each of the assets in the portfolio.",
"title": ""
},
{
"docid": "a1bc239d1de92df58caa1ee98f360a74",
"text": "We propose a planar V-band beam-steering antenna based on a millimeter-wave (mm-wave) system-on-package technology using advanced thin-film technology on a silicon mother board. The thin-film substrate has the capability of integrated passive elements and flip-chip interconnection. Space-consuming components such as a microstrip Rotman lens and patch antennas are implemented on a low-loss, low-cost, and low-dielectric-constant material, benzocyclobutane. V-band monolithic microwave integrated circuits, a power amplifier, and a SP4T switch, are flip-chipped on the thin-film substrate while minimizing parasitic effects. The fabricated antenna module shows if-plane beam steering at four angles, plusmn6deg and plusmn20deg. The measured effective isotropic radiated power is in the range of 17.3-18 dBm. To our knowledge, this is the first demonstration of a planar mm-wave beam-steering antenna module on a thin-film substrate incorporating integrated terminating loads and flip-chip interconnections.",
"title": ""
},
{
"docid": "b1932eb235932a45a6bd533876ee3867",
"text": "Received: 22 October 2009 Revised: 5 September 2011 Accepted: 16 September 2011 Abstract Enterprise Content Management (ECM) focuses on managing all types of content being used in organizations. It is a convergence of previous approaches that focus on managing only particular types of content, as for example documents or web pages. In this paper, we present an overview of previous research by categorizing the existing literature. We show that scientific literature on ECM is limited and there is no consensus on the definition of ECM. Therefore, the literature review surfaced several ECM definitions that we merge into a more consistent and comprehensive definition of ECM. The Functional ECM Framework (FEF) provides an overview of the potential functionalities of ECM systems (ECMSs). We apply the FEF in three case studies. The FEF can serve to communicate about ECMSs, to understand them and to direct future research. It can also be the basis for a more formal reference architecture and it can be used as an assessment tool by practitioners for comparing the functionalities provided by existing ECMSs. European Journal of Information Systems (2011) advance online publication, 25 October 2011; doi:10.1057/ejis.2011.41",
"title": ""
},
{
"docid": "32faa5a14922d44101281c783cf6defb",
"text": "A novel multifocus color image fusion algorithm based on the quaternion wavelet transform (QWT) is proposed in this paper, aiming at solving the image blur problem. The proposed method uses a multiresolution analysis procedure based on the quaternion wavelet transform. The performance of the proposed fusion scheme is assessed by some experiments, and the experimental results show that the proposed method is effective and performs better than the existing fusion methods.",
"title": ""
},
{
"docid": "efeffb457003012eb8db209fe025294c",
"text": "TV white space refers to TV channels that are not used by any licensed services at a particular location and at a particular time. To exploit this unused TVWS spectrum for improved spectrum efficiency, regulatory agencies have begun developing regulations to permit its use this TVWS by unlicensed wireless devices as long as they do not interfere with any licensed services. In the future many heterogeneous, and independently operated, wireless networks may utilize the TVWS. Coexistence between these networks is essential in order to provide a high level of QoS to end users. Consequently, the IEEE 802 LAN/MAN standards committee has approved the P802.19.1 standardization project to specify radio-technology-independent methods for coexistence among dissimilar or independently operated wireless devices and networks. In this article we provide a detailed overview of the regulatory status of TVWS in the United States and Europe, analyze the coexistence problem in TVWS, and summarize existing coexisting mechanisms to improve coexistence in TVWS. The main focus of the article is the IEEE P802.19.1 standardization project, including its requirements and system design, and the major technical challenges ahead.",
"title": ""
},
{
"docid": "07c5026c6311f6a9b19bf49c887b93af",
"text": "Contracts are legally binding descriptions of business service engagements. In particular, we consider business events as elements of a service engagement. Business events such as purchase, delivery, bill payment, and bank interest accrual not only correspond to essential processes but are also inherently temporally constrained. Identifying and understanding the events and their temporal relationships can help a business partner determine what to deliver and what to expect from others as it participates in the service engagement specified by a contract. However, contracts are expressed in unstructured text and their insights are buried therein. Our contributions are threefold. We develop a novel approach employing a hybrid of surface patterns, parsing, and classification to extract 1) business events and 2) their temporal constraints from contract text. We use topic modeling to 3) automatically organize the event terms into clusters. An evaluation on a real-life contract dataset demonstrates the viability and promise of our hybrid approach, yielding an F-measure of 0.89 in event extraction and 0.90 in temporal constraints extraction. The topic model yields event term clusters with an average match of 85 percent between two independent human annotations and an expert-assigned set of class labels for the clusters.",
"title": ""
}
] |
scidocsrr
|
26eaf40e7db6657f443e773a4861eabc
|
Insights into Electrochemical Oxidation of NaO2 in Na-O2 Batteries via Rotating Ring Disk and Spectroscopic Measurements.
|
[
{
"docid": "ea96aa3b9f162c69c738be2b190db9e0",
"text": "Batteries are currently being developed to power an increasingly diverse range of applications, from cars to microchips. How can scientists achieve the performance that each application demands? How will batteries be able to power the many other portable devices that will no doubt be developed in the coming years? And how can batteries become a sustainable technology for the future? The technological revolution of the past few centuries has been fuelled mainly by variations of the combustion reaction, the fire that marked the dawn of humanity. But this has come at a price: the resulting emissions of carbon dioxide have driven global climate change. For the sake of future generations, we urgently need to reconsider how we use energy in everything from barbecues to jet aeroplanes and power stations. If a new energy economy is to emerge, it must be based on a cheap and sustainable energy supply. One of the most flagrantly wasteful activities is travel, and here battery devices can potentially provide a solution, especially as they can be used to store energy from sustainable sources such as the wind and solar power. Because batteries are inherently simple in concept, it is surprising that their development has progressed much more slowly than other areas of electronics. As a result, they are often seen as being the heaviest, costliest and least-green components of any electronic device. It was the lack of good batteries that slowed down the deployment of electric cars and wireless communication, which date from at least 1899 and 1920, respectively (Fig. 1). The slow progress is due to the lack of suitable electrode materials and electrolytes, together with difficulties in mastering the interfaces between them. All batteries are composed of two electrodes connected by an ionically conductive material called an electrolyte. The two electrodes have different chemical potentials, dictated by the chemistry that occurs at each. When these electrodes are connected by means of an external device, electrons spontaneously flow from the more negative to the more positive potential. Ions are transported through the electrolyte, maintaining the charge balance, and electrical energy can be tapped by the external circuit. In secondary, or rechargeable, batteries, a larger voltage applied in the opposite direction can cause the battery to recharge. The amount of electrical energy per mass or volume that a battery can deliver is a function of the cell's voltage and capacity, which are dependent on the …",
"title": ""
}
] |
[
{
"docid": "df15ea13d3bbcb7e9c5658670d37c6b1",
"text": "We present a new time integration method featuring excellent stability and energy conservation properties, making it particularly suitable for real-time physics. The commonly used backward Euler method is stable but introduces artificial damping. Methods such as implicit midpoint do not suffer from artificial damping but are unstable in many common simulation scenarios. We propose an algorithm that blends between the implicit midpoint and forward/backward Euler integrators such that the resulting simulation is stable while introducing only minimal artificial damping. We achieve this by tracking the total energy of the simulated system, taking into account energy-changing events: damping and forcing. To facilitate real-time simulations, we propose a local/global solver, similar to Projective Dynamics, as an alternative to Newton’s method. Compared to the original Projective Dynamics, which is derived from backward Euler, our final method introduces much less numerical damping at the cost of minimal computing overhead. Stability guarantees of our method are derived from the stability of backward Euler, whose stability is a widely accepted empirical fact. However, to our knowledge, theoretical guarantees have so far only been proven for linear ODEs. We provide preliminary theoretical results proving the stability of backward Euler also for certain cases of nonlinear potential functions.",
"title": ""
},
{
"docid": "3e34173e2efec69f021a9e7efa6648cd",
"text": "Multi-robot systems require efficient and accurate planning in order to perform mission-critical tasks. This paper introduces a mixed-integer linear programming solution to coordinate multiple heterogenenous robots for detecting and controlling multiple regions of interest in an unknown environment. The objective function contains four basic requirements of a multi-robot system serving this purpose: control regions of interest, provide communication between robots, control maximum area and detect regions of interest. Our solution defines optimum locations of robots in order to maximize the objective function while efficiently satisfying some constraints such as avoiding obstacles and staying within the speed capabilities of the robots. We implemented and tested our approach under realistic scenarios. We showed various extensions to objective function and constraints to show the flexibility of mixed-integer linear programming formulation. Type of Report: Other Department of Computer Science & Engineering Washington University in St. Louis Campus Box 1045 St. Louis, MO 63130 ph: (314) 935-6160 Mixed-Integer Linear Programming Solution to Multi-Robot Task Allocation Problem Nuzhet Atay Department of Computer Science and Engineering Washington University in St. Louis Email: atay@cse.wustl.edu Burchan Bayazit Department of Computer Science and Engineering Washington University in St. Louis Email: bayazit@cse.wustl.edu Abstract— Multi-robot systems require efficient and accurate planning in order to perform mission-critical tasks. This paper introduces a mixed-integer linear programming solution to coordinate multiple heterogenenous robots for detecting and controlling multiple regions of interest in an unknown environment. The objective function contains four basic requirements of a multi-robot system serving this purpose: control regions of interest, provide communication between robots, control maximum area and detect regions of interest. Our solution defines optimum locations of robots in order to maximize the objective function while efficiently satisfying some constraints such as avoiding obstacles and staying within the speed capabilities of the robots. We implemented and tested our approach under realistic scenarios. We showed various extensions to objective function and constraints to show the flexibility of mixed-integer linear programming formulation. Multi-robot systems require efficient and accurate planning in order to perform mission-critical tasks. This paper introduces a mixed-integer linear programming solution to coordinate multiple heterogenenous robots for detecting and controlling multiple regions of interest in an unknown environment. The objective function contains four basic requirements of a multi-robot system serving this purpose: control regions of interest, provide communication between robots, control maximum area and detect regions of interest. Our solution defines optimum locations of robots in order to maximize the objective function while efficiently satisfying some constraints such as avoiding obstacles and staying within the speed capabilities of the robots. We implemented and tested our approach under realistic scenarios. We showed various extensions to objective function and constraints to show the flexibility of mixed-integer linear programming formulation.",
"title": ""
},
{
"docid": "5c1fc37a7677641f8927ad78cda6deb0",
"text": "Multi-label problems arise in various domains such as multi-topic document categorization and protein function prediction. One natural way to deal with such problems is to construct a binary classifier for each label, resulting in a set of independent binary classification problems. Since the multiple labels share the same input space, and the semantics conveyed by different labels are usually correlated, it is essential to exploit the correlation information contained in different labels. In this paper, we consider a general framework for extracting shared structures in multi-label classification. In this framework, a common subspace is assumed to be shared among multiple labels. We show that the optimal solution to the proposed formulation can be obtained by solving a generalized eigenvalue problem, though the problem is non-convex. For high-dimensional problems, direct computation of the solution is expensive, and we develop an efficient algorithm for this case. One appealing feature of the proposed framework is that it includes several well-known algorithms as special cases, thus elucidating their intrinsic relationships. We have conducted extensive experiments on eleven multi-topic web page categorization tasks, and results demonstrate the effectiveness of the proposed formulation in comparison with several representative algorithms.",
"title": ""
},
{
"docid": "1f5a244d4ef3e6129d14c50fb26bc9eb",
"text": "The authors describe blockchain’s fundamental concepts, provide perspectives on its challenges and opportunities, and trace its origins from the Bitcoin digital cash system to recent applications.",
"title": ""
},
{
"docid": "834a9494e53c95c4e656bde284a453fd",
"text": "Simulation is one of five key technologies that PwC's Artificial Intelligence Accelerator lab uses to build Artificial Intelligence (AI) applications. Application of AI is accelerating rapidly, spawning new sectors, and resulting in unprecedented reach, power, and influence. Simulation explicitly captures the behavior of agents and processes that can either be described by or replaced by AI components. AI components can be embedded into a simulation to provide learning or adaptive behavior. And, simulation can be used to evaluate the impact of introducing AI into a “real world system” such as supply chains or production processes. In this workshop we will demonstrate an Agent-Based Model with Reinforcement Learning for Autonomous Fleet Coordination; demonstrate and describe in detail a version of the AnyLogic Consumer Market Model that has been modified to include adaptive dynamics based on deep learning; and describe approaches to integrating machine learning to the design and development of simulations.",
"title": ""
},
{
"docid": "4a9b4668296561b3522c3c57c64220c1",
"text": "Hyperspectral imagery, which contains hundreds of spectral bands, has the potential to better describe the biological and chemical attributes on the plants than multispectral imagery and has been evaluated in this paper for the purpose of crop yield estimation. The spectrum of each pixel in a hyperspectral image is considered as a linear combinations of the spectra of the vegetation and the bare soil. Recently developed linear unmixing approaches are evaluated in this paper, which automatically extracts the spectra of the vegetation and bare soil from the images. The vegetation abundances are then computed based on the extracted spectra. In order to reduce the influences of this uncertainty and obtain a robust estimation results, the vegetation abundances extracted on two different dates on the same fields are then combined. The experiments are carried on the multidate hyperspectral images taken from two grain sorghum fields. The results show that the correlation coefficients between the vegetation abundances obtained by unsupervised linear unmixing approaches are as good as the results obtained by supervised methods, where the spectra of the vegetation and bare soil are measured in the laboratory. In addition, the combination of vegetation abundances extracted on different dates can improve the correlations (from 0.6 to 0.7).",
"title": ""
},
{
"docid": "26b67fe7ee89c941d313187672b1d514",
"text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.",
"title": ""
},
{
"docid": "72d4f389f791584229409fdb93186d66",
"text": "INTRODUCTION\nThe sphincteric and supportive functions of the pelvic floor are fairly well understood, and pelvic floor rehabilitation, a specialized field within the scope and practice of physical therapy, has demonstrated effectiveness in the treatment of urinary and fecal incontinence. The role of the pelvic floor in the promotion of optimal sexual function has not been clearly elucidated.\n\n\nAIM\nTo review the role of the pelvic floor in the promotion of optimal sexual function and examine the role of pelvic floor rehabilitation in treating sexual dysfunction.\n\n\nMAIN OUTCOME MEASURE\nReview of peer-reviewed literature.\n\n\nRESULTS\nIt has been proposed that the pelvic floor muscles are active in both male and female genital arousal and orgasm, and that pelvic floor muscle hypotonus may impact negatively on these phases of function. Hypertonus of the pelvic floor is a significant component of sexual pain disorders in women and men. Furthermore, conditions related to pelvic floor dysfunction, such as pelvic pain, pelvic organ prolapse, and lower urinary tract symptoms, are correlated with sexual dysfunction.\n\n\nCONCLUSIONS\nThe involvement of the pelvic floor in sexual function and dysfunction is examined, as well as the potential role of pelvic floor rehabilitation in treatment. Further research validating physical therapy intervention is necessary.",
"title": ""
},
{
"docid": "0449eaba0eea843d71008751de4cf452",
"text": "Recent advances in bridging the semantic gap between virtual machines (VMs) and their guest processes have a dark side: They can be abused to subvert and compromise VM file system images and process images. To demonstrate this alarming capability, a context-aware, reactive VM Introspection (VMI) instrument is presented and leveraged to automatically break the authentication mechanisms of both Linux and Windows operating systems. By bridging the semantic gap, the attack is able to automatically identify critical decision points where authentication succeeds or fails at the binary level. It can then leverage the VMI to transparently corrupt the control-flow or data-flow of the victim OS at that point, resulting in successful authentication without any password-guessing or encryption-cracking. The approach is highly flexible (threatening a broad class of authentication implementations), practical (realizable against real-world OSes and VM images), and useful for both malicious attacks and forensics analysis of virtualized systems and software.",
"title": ""
},
{
"docid": "55745523b43b49ef02bf5e7628f7be84",
"text": "A fabrication process for the simultaneous shaping of arrays of glass shells on a wafer level is introduced in this paper. The process is based on etching cavities in silicon, followed by anodic bonding of a thin glass wafer to the etched silicon wafer. The bonded wafers are then heated inside a furnace at a temperature above the softening point of the glass, and due to the expansion of the trapped gas in the silicon cavities the glass is blown into three-dimensional spherical shells. An analytical model which can be used to predict the shape of the glass shells is described and demonstrated to match the experimental data. The ability to blow glass on a wafer level may enable novel capabilities including mass-production of microscopic spherical gas confinement chambers, microlenses, and complex microfluidic networks",
"title": ""
},
{
"docid": "98a3216257c9c2358d2a70247b185cb9",
"text": "Deep Neural Networks (DNNs) have achieved impressive accuracy in many application domains including im-age classification. Training of DNNs is an extremely compute-intensive process and is solved using variants of the stochastic gradient descent (SGD) algorithm. A lot of recent research has focused on improving the performance of DNN training. In this paper, we present optimization techniques to improve the performance of the data parallel synchronous SGD algorithm using the Torch framework: (i) we maintain data in-memory to avoid file I/O overheads, (ii) we propose optimizations to the Torch data parallel table framework that handles multi-threading, and (iii) we present MPI optimization to minimize communication overheads. We evaluate the performance of our optimizations on a Power 8 Minsky cluster with 64 nodes and 256 NVidia Pascal P100 GPUs. With our optimizations, we are able to train 90 epochs of the ResNet-50 model on the Imagenet-1k dataset using 256 GPUs in just 48 minutes. This significantly improves on the previously best known performance of training 90 epochs of the ResNet-50 model on the same dataset using the same number of GPUs in 65 minutes. To the best of our knowledge, this is the best known training performance demonstrated for the Imagenet-1k dataset using 256 GPUs.",
"title": ""
},
{
"docid": "19b602b49f0fcd51f5ec7f240fe26d60",
"text": "Wireless communication by leveraging the use of low-altitude unmanned aerial vehicles (UAVs) has received significant interests recently due to its low-cost and flexibility in providing wireless connectivity in areas without infrastructure coverage. This paper studies a UAV-enabled mobile relaying system, where a high-mobility UAV is deployed to assist in the information transmission from a ground source to a ground destination with their direct link blocked. By assuming that the UAV adopts the energy-efficient circular trajectory and employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we maximize the spectrum efficiency (SE) in bits/second/Hz as well as energy efficiency (EE) in bits/Joule of the considered system by jointly optimizing the time allocations for the UAV's relaying together with its flying speed and trajectory. It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.",
"title": ""
},
{
"docid": "a6cf86ffa90c74b7d7d3254c7d33685a",
"text": "Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semistructured data as graphs where nodes correspond to primitives (parts, interest points, and segments) and edges characterize the relationships between these primitives. However, these nonvectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of--explicit/implicit--graph vectorization and embedding. This embedding process should be resilient to intraclass graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.",
"title": ""
},
{
"docid": "a7d3c1a4089d55461f9c74a345883f63",
"text": "Robots that can easily interact with humans and move through natural environments are becoming increasingly essential as assistive devices in the home, office and hospital. These machines need to be safe, effective, and easy to control. One strategy towards accomplishing these goals is to build the robots using soft and flexible materials to make them much more approachable and less likely to damage their environment. A major challenge is that comparatively little is known about how best to design, fabricate and control deformable machines. Here we describe the design, fabrication and control of a novel soft robotic platform (Softworms) as a modular device for research, education and public outreach. These robots are inspired by recent neuromechanical studies of crawling and climbing by larval moths and butterflies (Lepidoptera, caterpillars). Unlike most soft robots currently under development, the Softworms do not rely on pneumatic or fluidic actuators but are electrically powered and actuated using either shape-memory alloy microcoils or motor tendons, and they can be modified to accept other muscle-like actuators such as electroactive polymers. The technology is extremely versatile, and different designs can be quickly and cheaply fabricated by casting elastomeric polymers or by direct 3D printing. Softworms can crawl, inch or roll, and they are steerable and even climb steep inclines. Softworms can be made in any shape but here we describe modular and monolithic designs requiring little assembly. These modules can be combined to make multi-limbed devices. We also describe two approaches for controlling such highly deformable structures using either model-free state transition-reward matrices or distributed, mechanically coupled oscillators. In addition to their value as a research platform, these robots can be developed for use in environmental, medical and space applications where cheap, lightweight and shape-changing deformable robots will provide new performance capabilities.",
"title": ""
},
{
"docid": "01f92f1028201ff5790b4f20ef84618c",
"text": "The need for high-frequency, low-power, wide temperature range, precision on-chip reference clock generation makes relaxation oscillator topology an attractive solution for various automotive applications. This paper presents for the first time a 140MHz relaxation oscillator with robust-against-process-variation temperature compensation scheme. The high-frequency relaxation oscillator achieves 28 ppm/°C frequency stability over the automotive temperature range from −40 to 175°C. The circuit is fabricated in 40nm CMOS technology, occupies 0.009 mm2 and consumes 294µW from 1.2V supply.",
"title": ""
},
{
"docid": "d242ef5126dfb2db12b54c15be61367e",
"text": "RankNet is one of the widely adopted ranking models for web search tasks. However, adapting a generic RankNet for personalized search is little studied. In this paper, we first continue-trained a variety of RankNets with different number of hidden layers and network structures over a previously trained global RankNet model, and observed that a deep neural network with five hidden layers gives the best performance. To further improve the performance of adaptation, we propose a set of novel methods categorized into two groups. In the first group, three methods are proposed to properly assess the usefulness of each adaptation instance and only leverage the most informative instances to adapt a user-specific RankNet model. These assessments are based on KL-divergence, click entropy or a heuristic to ignore top clicks in adaptation queries. In the second group, two methods are proposed to regularize the training of the neural network in RankNet: one of these methods regularize the error back-propagation via a truncated gradient approach, while the other method limits the depth of the back propagation when adapting the neural network. We empirically evaluate our approaches using a large-scale real-world data set. Experimental results exhibit that our methods all give significant improvements over a strong baseline ranking system, and the truncated gradient approach gives the best performance, significantly better than all others.",
"title": ""
},
{
"docid": "3ed8e90ada749d74bc32205ec6f7e819",
"text": "Maintenance of distributed infrastructures requires periodic measurement of many physical variables at numerous locations. This task can potentially be accomplished with autonomous robotic mobile platforms. The challenges in realizing this vision include electromechanical design of the robot itself, integration of sensors able to estimate the physical properties of the infrastructure, and autonomous operation. This paper describes the electromechanical and sensing system design of the autonomous robot for the inspection of electric power cables. The multiprocessor distributed control architecture developed in this project allows real-time operation of multiple sensors and a possibility to switch between an autonomous mode of operation and a remote-controlled one. The diagnostic sensor array includes thermal, visual, dielectric, and acoustic sensors for the measurement of cable status. Laboratory tests demonstrate the ability of integrated sensors to measure parameters of interest with the resolution required by the application. Field tests in the underground cable system demonstrate the ability of the designed platform to travel along the cable, navigate typical obstacles, and communicate with the host computer",
"title": ""
},
{
"docid": "5f9a122e8748d375b8b7bac838829b06",
"text": "We analyzed heart rate variability (HRV) taken by ECG and photoplethysmography (PPG) to assess their agreement. We also analyzed the sensitivity and specificity of PPG to identify subjects with low HRV as an example of its potential use for clinical applications. The HRV parameters: mean heart rate (HR), amplitude, and ratio of heart rate oscillation (E–I difference, E/I ratio), RMSSD, SDNN, and Power LF, were measured during 1-min deep breathing tests (DBT) in 343 individuals, followed by a 5-min short-term HRV (s-HRV), where the HRV parameters: HR, SD1, SD2, SDNN, Stress Index, Power HF, Power LF, Power VLF, and Total Power, were determined as well. Parameters were compared through correlation analysis and agreement analysis by Bland–Altman plots. PPG derived parameters HR and SD2 in s-HRV showed better agreement than SD1, Power HF, and stress index, whereas in DBT HR, E/I ratio and SDNN were superior to Power LF and RMSSD. DBT yielded stronger agreement than s-HRV. A slight overestimation of PPG HRV over HCG HRV was found. HR, Total Power, and SD2 in the s-HRV, HR, Power LF, and SDNN in the DBT showed high sensitivity and specificity to detect individuals with poor HRV. Cutoff percentiles are given for the future development of PPG-based devices. HRV measured by PPG shows good agreement with ECG HRV when appropriate parameters are used, and PPG-based devices can be employed as an easy screening tool to detect individuals with poor HRV, especially in the 1-min DBT test.",
"title": ""
},
{
"docid": "f0c4c1a82eee97d19012421614ee5d5f",
"text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.",
"title": ""
},
{
"docid": "54663fcef476f15e2b5261766a19375b",
"text": "In this study, performances of classification techniques were compared in order to predict the presence of coronary artery disease (CAD). A retrospective analysis was performed in 1245 subjects (865 presence of CAD and 380 absence of CAD). We compared performances of logistic regression (LR), classification and regression tree (CART), multi-layer perceptron (MLP), radial basis function (RBF), and self-organizing feature maps (SOFM). Predictor variables were age, sex, family history of CAD, smoking status, diabetes mellitus, systemic hypertension, hypercholesterolemia, and body mass index (BMI). Performances of classification techniques were compared using ROC curve, Hierarchical Cluster Analysis (HCA), and Multidimensional Scaling (MDS). Areas under the ROC curves are 0.783, 0.753, 0.745, 0.721, and 0.675, respectively for MLP, LR, CART, RBF, and SOFM. MLP was found the best technique to predict presence of CAD in this data set, given its good classificatory performance. MLP, CART, LR, and RBF performed better than SOFM in predicting CAD in according to HCA and MDS. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
915dd26a9b31cbb6fe2ee46b089de8f0
|
The Use of Social Media in the Supply Chain: Survey and Extensions
|
[
{
"docid": "55462ae5eeb747114dfda77d14519557",
"text": "In an environment where supply chains compete against supply chains, information sharing among supply chain partners using information systems is a competitive tool. Supply chain ontology has been proposed as an important medium for attaining information systems interoperability. Ontology has its origin in philosophy, and the computing community has adopted ontology in its language. This paper presents a study of state of the art research in supply chain ontology and identifies the outstanding research gaps. Six supply chain ontology models were identified from a systematic review of literature. A seven point comparison framework was developed to consider the underlying concepts as well as application of the ontology models. The comparison results were then synthesised into nine gaps to inform future supply chain ontology research. This work is a rigorous and systematic attempt to identify and synthesise the research in supply chain ontology. 2010 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "7448defe73a531018b11ac4b4b38b4cb",
"text": "Calcium oxalate crystalluria is a problem of growing concern in dogs. A few reports have discussed acute kidney injury by oxalates in dogs, describing ultrastructural findings in particular. We evaluated the possibility of deposition of calcium oxalate crystals in renal tissue and its probable consequences. Six dogs were intravenously injected with 0.5 M potassium oxalate (KOx) for seven consecutive days. By the end of the experiment, ultrasonography revealed a significant increase in the renal mass and renal parenchymal echogenicity. Serum creatinine and blood urea nitrogen levels were gradually increased. The histopathological features of the kidneys were assessed by both light and electron microscopy, which showed CaOx crystal deposition accompanied by morphological changes in the renal tissue of KOx injected dogs. Canine renal oxalosis provides a good model to study the biological and pathological changes induced upon damage of renal tissue by KOx injection.",
"title": ""
},
{
"docid": "18a985c7960ee6c94f3f8bde503c07ce",
"text": "Computer-controlled, human-like virtual agents (VAs), are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.",
"title": ""
},
{
"docid": "13025bfe1c6c8c7e6489dba627521ae7",
"text": "Over the last twenty years aspectual notions have been increasingly appealed to in structuring verbal lexical semantic representations and, concomitantly, in formulating principles of argument expression. This move has been further fueled by the significant insights that have emerged from this line of research. Yet, despite the enthusiasm for aspectual notions that their proliferation demonstrates, I propose that such notions are not the panacea that their considerable use would suggest. Although I also have adopted them in my work, my continuing research into lexical semantic representation and argument expression has suggested to me that the links between aspect, lexical semantic representation, and argument expression are not so simple and transparent as they are made out to be. I use this study to reassess the contributions of aspect to lexical semantic representation and argument expression. The striking acceptance of aspectual notions as a means of structuring lexical semantic representations may have its roots in some well-known drawbacks of lexical semantic representations that take the form of semantic role lists. As often pointed out, semantic role lists are not grounded in a theory of events, leaving them unconstrained and vulnerable to criticism. Aspectual classifications, proposed at least as early as Aristotle and taken up more recently by Vendler (1957), Kenny (1963), and many others, offer a ready-made theory of the ontological types of events, which grounds them in their temporal contours. Furthermore, aspectual classifications have proved their usefulness in accounts of temporal entailments and temporal adverbial distribution. With this incentive, aspectual classes have been increasingly adopted as the appropriate event types for the twin purposes of structuring lexical semantic representations and formulating a theory of argument expression, as I now review. I then consider how well such attempts succeed. I suggest that it is right to",
"title": ""
},
{
"docid": "eac2100a0fa189aecc148b70e113a0b0",
"text": "Zolt ́n Dörnyei Language Teaching / Volume 31 / Issue 03 / July 1998, pp 117 135 DOI: 10.1017/S026144480001315X, Published online: 12 June 2009 Link to this article: http://journals.cambridge.org/abstract_S026144480001315X How to cite this article: Zolt ́n Dörnyei (1998). Motivation in second and foreign language learning. Language Teaching, 31, pp 117135 doi:10.1017/S026144480001315X Request Permissions : Click here",
"title": ""
},
{
"docid": "af69cdae1b331c012dab38c47e2c786c",
"text": "A 44 μW self-powered power line monitoring sensor node is implemented in 65 nm CMOS. A 450 kHz 30 kbps BPSK-modulated transceiver allows for 1.5-meter node-to-node powerline communication at 10E-6 BER. The node has a 3.354 ENOB 50 kSps SAR ADC for current measurement and a 440 Sps time-to-digital converter capable of measuring temperature from 0-100 °C in 1.12 °C steps. All components operate at a nominal supply voltage of 0.5 V, and are powered by dedicated regulators enabling fine-grained power management.",
"title": ""
},
{
"docid": "537efff0be1e13d69be850a4ac41c309",
"text": "MOTIVATION\nAutomatically quantifying semantic similarity and relatedness between clinical terms is an important aspect of text mining from electronic health records, which are increasingly recognized as valuable sources of phenotypic information for clinical genomics and bioinformatics research. A key obstacle to development of semantic relatedness measures is the limited availability of large quantities of clinical text to researchers and developers outside of major medical centers. Text from general English and biomedical literature are freely available; however, their validity as a substitute for clinical domain to represent semantics of clinical terms remains to be demonstrated.\n\n\nRESULTS\nWe constructed neural network representations of clinical terms found in a publicly available benchmark dataset manually labeled for semantic similarity and relatedness. Similarity and relatedness measures computed from text corpora in three domains (Clinical Notes, PubMed Central articles and Wikipedia) were compared using the benchmark as reference. We found that measures computed from full text of biomedical articles in PubMed Central repository (rho = 0.62 for similarity and 0.58 for relatedness) are on par with measures computed from clinical reports (rho = 0.60 for similarity and 0.57 for relatedness). We also evaluated the use of neural network based relatedness measures for query expansion in a clinical document retrieval task and a biomedical term word sense disambiguation task. We found that, with some limitations, biomedical articles may be used in lieu of clinical reports to represent the semantics of clinical terms and that distributional semantic methods are useful for clinical and biomedical natural language processing applications.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe software and reference standards used in this study to evaluate semantic similarity and relatedness measures are publicly available as detailed in the article.\n\n\nCONTACT\npakh0002@umn.eduSupplementary information: Supplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "c1ca7ef76472258c6359111dd4d014d5",
"text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.",
"title": ""
},
{
"docid": "cee4018679662d7e2aeaefa624e52a77",
"text": "While video games have traditionally been considered simple entertainment devices, nowadays they occupy a privileged position in the leisure and entertainment market, representing the fastest-growing industry globally. We regard the video game as a special type of interactive system whose principal aim is to provide the player with fun and entertainment. In this paper we will analyse how, in Video Games context, Usability alone is not sufficient to achieve the optimum Player Experience. It needs broadening and deepening, to embrace further attributes and properties that identify and describe the Player Experience. We present our proposed means of defining Playability. We also introduce the notion of Facets of Playability. Each facet will allow us to characterize the Playability easily, and associate them with the different elements of a video game. To guarantee the optimal Player Experience, Playability needs to be assessed throughout the entire video game development process, taking a Player-Centred Video Game Design approach.",
"title": ""
},
{
"docid": "83dd0cd815c79932e6ff8b1faf780ef2",
"text": "Pattern recognition and registration are integral elements of computer vision, which considers image patterns. This thesis presents novel blur, and combined blur and geometric invariant features for pattern recognition and registration related to images. These global or local features are based on the Fourier transform phase, and are invariant or insensitive to image blurring with a centrally symmetric point spread function which can result, for example, from linear motion or out of focus. The global features are based on the even powers of the phase-only discrete Fourier spectrum or bispectrum of an image and are invariant to centrally symmetric blur. These global features are used for object recognition and image registration. The features are extended for geometrical invariances up to similarity transformation: shift invariance is obtained using bispectrum, and rotation-scale invariance using log-polar mapping of bispectrum slices. Affine invariance can be achieved as well using rotated sets of the log-log mapped bispectrum slices. The novel invariants are shown to be more robust to additive noise than the earlier blur, and combined blur and geometric invariants based on image moments. The local features are computed using the short term Fourier transform in local windows around the points of interest. Only the lowest horizontal, vertical, and diagonal frequency coefficients are used, the phase of which is insensitive to centrally symmetric blur. The phases of these four frequency coefficients are quantized and used to form a descriptor code for the local region. When these local descriptors are used for texture classification, they are computed for every pixel, and added up to a histogram which describes the local pattern. There are no earlier textures features which have been claimed to be invariant to blur. The proposed descriptors were superior in the classification of blurred textures compared to a few non-blur invariant state of the art texture classification methods.",
"title": ""
},
{
"docid": "beb7509b59f1bac8083ce5fbddb247e5",
"text": "Congestion in the Industrial, Scientific, and Medical (ISM) frequency band limits the expansion of the IEEE 802.11 Wireless Local Area Network (WLAN). Recently, due to the ‘digital switchover’ from analog to digital TV (DTV) broadcasting, a sizeable amount of bandwidth have been freed up in the conventional TV bands, resulting in the availability of TV white space (TVWS). The IEEE 802.11af is a standard for the WLAN technology that operates at the TVWS spectrum. TVWS operation must not cause harmful interference to the incumbent DTV service. This paper provides a method of computing the keep-out distance required between an IEEE 802.11af device and the DTV service contour, in order to keep the interference to a harmless level. The ITU-R P.1411-7 propagation model is used in the calculation. Four different DTV services are considered: Advanced Television Systems Committee (ATSC), Digital Video Broadcasting — Terrestrial (DVB-T), Integrated Services Digital Broadcasting — Terrestrial (ISDB-T), and Digital Terrestrial Multimedia Broadcasting (DTMB). The calculation results reveal that under many circumstances, allocating keep-out distance of 1 to 2.5 km is sufficient for the protection of DTV service.",
"title": ""
},
{
"docid": "6087be6cef33af7d8fbfa55c8125bdb7",
"text": "Support Vector Machines (SVM) are the classifiers which were originally designed for binary classification. The classification applications can solve multi-class problems. Decision-tree-based support vector machine which combines support vector machines and decision tree can be an effective way for solving multi-class problems in Intrusion Detection Systems (IDS). This method can decrease the training and testing time of the IDS, increasing the efficiency of the system. The different ways to construct the binary trees divides the data set into two subsets from root to the leaf until every subset consists of only one class. The construction order of binary tree has great influence on the classification performance. In this paper we are studying two decision tree approaches: Hierarchical multiclass SVM and Tree structured multiclass SVM, to construct multiclass intrusion detection system.",
"title": ""
},
{
"docid": "31e75f77ce2bdefe63d350e8c476016b",
"text": "We present DeepVesselNet, an architecture tailored to the challenges to be addressed when extracting vessel networks and corresponding features in 3-D angiographic volume using deep learning. We discuss the problems of low execution speed and high memory requirements associated with full 3-D convolutional networks, high class imbalance arising from low percentage (less than 3%) of vessel voxels, and unavailability of accurately annotated training data and offer solutions that are the building blocks of DeepVesselNet. First, we formulate 2-D orthogonal cross-hair filters which make use of 3-D context information at a reduced computational burden. Second, we introduce a class balancing cross-entropy score with false positive rate correction to handle the high class imbalance and high false positive rate problems associated with existing loss functions. Finally, we generate synthetic dataset using a computational angiogenesis model, capable of generating vascular networks under physiological constraints on local network structure and topology, and use these data for transfer learning. DeepVesselNet is optimized for segmenting and analyzing vessels, and we test the performance on a range of angiographic volumes including clinical Time-of-Flight MRA data of the human brain, as well as synchrotron radiation X-ray tomographic microscopy scans of the rat brain. Our experiments show that, by replacing 3-D filters with 2-D orthogonal cross-hair filters in our network, we achieve over 23% improvement in speed, lower memory footprint, lower network complexity which prevents over fitting and comparable and even sometimes higher accuracy. Our class balancing metric is crucial for training the network and pre-training with synthetic data helps in early convergence of the training process.",
"title": ""
},
{
"docid": "0452cba63dfe7a89cc3cb5802fcfdd3e",
"text": "We show efficient algorithms for edge-coloring planar graphs. Our main result is a linear-time algorithm for coloring planar graphs with maximum degree Δ with max {Δ,9} colors. Thus the coloring is optimal for graphs with maximum degree Δ≥9. Moreover for Δ=4,5,6 we give linear-time algorithms that use Δ+2 colors. These results improve over the algorithms of Chrobak and Yung (J. Algorithms 10:35–51, 1989) and of Chrobak and Nishizeki (J. Algorithms 11:102–116, 1990) which color planar graphs using max {Δ,19} colors in linear time or using max {Δ,9} colors in $\\mathcal{O}(n\\log n)$ time.",
"title": ""
},
{
"docid": "57a333a88a5c1f076fd096ec4cde4cba",
"text": "2.1 HISTORY OF BIOTECHNOLOGY....................................................................................................6 2.2 MODERN BIOTECHNOLOGY ........................................................................................................6 2.3 THE GM DEBATE........................................................................................................................7 2.4 APPLYING THE PRECAUTIONARY APPROACH TO GMOS .............................................................8 2.5 RISK ASSESSMENT ISSUES ..........................................................................................................9 2.6 LEGAL CONTEXT ......................................................................................................................10 T",
"title": ""
},
{
"docid": "f1c2af06078b6b5c802d773a72fc22ad",
"text": "Virtual environments have the potential to become important new research tools in environment behavior research. They could even become the future (virtual) laboratories, if reactions of people to virtual environments are similar to those in real environments. The present study is an exploration of the comparability of research findings in real and virtual environments. In the study, 101 participants explored an identical space, either in reality or in a computer-simulated environment. Additionally, the presence of plants in the space was manipulated, resulting in a 2 (environment) 2 (plants) between-subjects design. Employing a broad set of measurements, we found mixed results. Performances on size estimations and a cognitive mapping task were significantly better in the real environment. Factor analyses of bipolar adjectives indicated that, although four dimensions were similar for both environments, a fifth dimension of environmental assessmenttermedarousalwas absent in the virtual environment. In addition, we found significant differences on the scores of four of the scales. However, no significant interactions appeared between environment and plants. Experience of and behavior in virtual environments have similarities to that in real environments, but there are important differences as well. We conclude that this is not only a necessary, but also a very interesting research subject for environmental psychology.",
"title": ""
},
{
"docid": "e4e87d127f4cacf471d1dbf8788f8548",
"text": "We propose a method for computer-based speed writing, SHARK (shorthand aided rapid keyboarding), which augments stylus keyboarding with shorthand gesturing. SHARK defines a shorthand symbol for each word according to its movement pattern on an optimized stylus keyboard. The key principles for the SHARK design include high efficiency stemmed from layout optimization, duality of gesturing and stylus tapping, scale and location independent writing, Zipf's law, and skill transfer from tapping to shorthand writing due to pattern consistency. We developed a SHARK system based on a classic handwriting recognition algorithm. A user study demonstrated the feasibility of the SHARK method.",
"title": ""
},
{
"docid": "05d3029a38631e4c0e445731f655b52c",
"text": "This paper presents a non-inverting buck-boost based power-factor-correction (PFC) converter operating in the boundary-conduction-mode (BCM) for the wide input-voltage-range applications. Unlike other conventional PFC converters, the proposed non-inverting buck-boost based PFC converter has both step-up and step-down conversion functionalities to provide positive DC output-voltage. In order to reduce the turn-on switching-loss in high frequency applications, the BCM current control is employed to achieve zero current turn-on for the power switches. Besides, the relationships of the power factor versus the voltage conversion ratio between the BCM boost PFC converter and the proposed BCM non-inverting buck-boost PFC converter are also provided. Finally, the 70-watt prototype circuit of the proposed BCM buck-boost based PFC converter is built for the verification of the high frequency and wide input-voltage-range.",
"title": ""
},
{
"docid": "080645f82f7c0308ad73e18e7d42ecb6",
"text": "We propose an end-to-end neural network that improves the segmentation accuracy of fully convolutional networks by incorporating a localization unit. This network performs object localization first, which is then used as a cue to guide the training of the segmentation network. We test the proposed method on a segmentation task of small objects on a clinical dataset of ultrasound images. We show that by jointly learning for detection and segmentation, the proposed network is able to improve the segmentation accuracy compared to only learning for segmentation.",
"title": ""
},
{
"docid": "061c67c967818b1a0ad8da55345c6dcf",
"text": "The paper aims at revealing the essence and connotation of Computational Thinking. It analyzed some of the international academia’s research results of Computational Thinking. The author thinks Computational Thinking is discipline thinking or computing philosophy, and it is very critical to understand Computational Thinking to grasp the thinking’ s computational features and the computing’s thinking attributes. He presents the basic rules of screening the representative terms of Computational Thinking and lists some representative terms based on the rules. He thinks Computational Thinking is contained in the commonalities of those terms. The typical thoughts of Computational Thinking are structuralization, formalization, association-and-interaction, optimization and reuse-and-sharing. Training Computational Thinking must base on the representative terms and the typical thoughts. There are three innovations in the paper: the five rules of screening the representative terms, the five typical thoughts and the formalized description of Computational Thinking.",
"title": ""
},
{
"docid": "6831c633bf7359b8d22296b52a9a60b8",
"text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.",
"title": ""
}
] |
scidocsrr
|
a038581165d0bd23fc05fbe8c4705089
|
Free-Standing Leaping Experiments with a Power-Autonomous, Elastic-Spined Quadruped
|
[
{
"docid": "9c57ace27f5b121b1031ccbd100907c4",
"text": "High speed legged locomotion involves high acceleration and extensive loadings of the leg, which impose critical challenges in actuator design. We introduce actuator dimensional analysis for maximizing torque density and transmission `transparency'. A front leg prototype developed based on insight from the analysis is evaluated for direct proprioceptive force control without force sensors. The vertical stiffness controlled leg was tested on a material testing device to calibrate the mechanical impedance of the leg. By compensating transmission impedance from commanded torque, the leg was able to estimate impact force. For the impact test, the mean absolute error as a ratio of full scale sensor force is 0.041 in the 3406 N/m stiffness experiment and is 0.049 in the 5038 N/m experiment. The results indicate that prescribed force profile control is possible during high speed locomotion.",
"title": ""
}
] |
[
{
"docid": "b1dc8163cdcaefcf313d6a6155922ad6",
"text": "Light Detection and Ranging (LiDAR) is an active sensor that can effectively acquire a large number of three-dimensional (3-D) points. LiDAR systems can be equipped on different platforms for different applications, but to integrate the data, point cloud registration is needed to improve geometric consistency. The registration of airborne and terrestrial mobile LiDAR is a challenging task because the point densities and scanning directions differ. We proposed a scheme for the registration of airborne and terrestrial mobile LiDAR using the least squares 3-D surface registration technique to minimize the surfaces between two datasets. To analyze the effect of point density in registration, the simulation data simulated different conditions and estimated the theoretical errors. The test data were the point clouds of the airborne LiDAR system (ALS) and the mobile LiDAR system (MLS), which were acquired by Optech ALTM 3070 and Lynx, respectively. The resulting simulation analysis indicated that the accuracy of registration improved as the density increased. For the test dataset, the registration error of mobile LiDAR between different trajectories improved from 40 cm to 4 cm, and the registration error between ALS and MLS improved from 84 cm to 4 cm. These results indicate that the proposed methods can obtain 5 cm accuracy between ALS and MLS.",
"title": ""
},
{
"docid": "d90a66cf63abdc1d0caed64812de7043",
"text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.",
"title": ""
},
{
"docid": "7ed1727c0f01bc8f37d37ea7e4e6a861",
"text": "CONTEXT\nHistopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability.\n\n\nAIMS\nWe built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis.\n\n\nSETTING AND DESIGN\nOur method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks.\n\n\nSUBJECTS AND METHODS\nOur method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multisociety Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 crop images, which were annotated by multiple domain expert pathologists as reference standards.\n\n\nSTATISTICAL ANALYSIS\nWe evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score and their 95% confidence intervals.\n\n\nRESULTS\nOur evaluation shows that our method with residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%-95.9%).\n\n\nCONCLUSIONS\nOur method can reduce the cognitive burden on pathologists and improve their efficacy in histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations.",
"title": ""
},
{
"docid": "deabd38990de9ed15958bb2ad28d225e",
"text": "Recent IoT-based DDoS attacks have exposed how vulnerable the Internet can be to millions of insufficiently secured IoT devices. To understand the risks of these attacks requires learning about these IoT devices---where are they, how many are there, how are they changing? In this paper, we propose a new method to find IoT devices in Internet to begin to assess this threat. Our approach requires observations of flow-level network traffic and knowledge of servers run by the manufacturers of the IoT devices. We have developed our approach with 10 device models by 7 vendors and controlled experiments. We apply our algorithm to observations from 6 days of Internet traffic at a college campus and partial traffic from an IXP to detect IoT devices.",
"title": ""
},
{
"docid": "c682e6d5423987dea072625e626a2146",
"text": "In this work, a solution for clustering and tracking obstacles in the area covered by a LIDAR sensor is presented. It is based on a combination of simple artificial intelligence techniques and it is conceived as an initial version of a detection and tracking system for objects of any shape that an autonomous vehicle might find in its surroundings. The proposed solution divides the problem into three consecutive phases: 1) segmentation, 2) fragmentation detection and clustering and 3) tracking. The work done has been tested with real world LIDAR scan samples taken from an instrumented vehicle.",
"title": ""
},
{
"docid": "1be4284ecc83855ecb2fee27dd8b12ac",
"text": "This paper describes a new strategy for real-time cooperative localization of autonomous vehicles. The strategy aims to improve the vehicles localization accuracy and reduce the impact of computing time of multi-sensor data fusion algorithms and vehicle-to-vehicle communication on parallel architectures. The method aims to solve localization issues in a cluster of autonomous vehicles, equipped with low-cost navigation systems in an unknown environment. It stands on multiple forms of the Kalman filter derivatives to estimate the vehicles' nonlinear model vector state, named local fusion node. The vehicles exchange their local state estimate and Covariance Intersection algorithm for merging the local vehicles' state estimate in the second node (named global data fusion node). This strategy simultaneously exploits the proprioceptive and sensors -a Global Positioning System, and a vehicle-to-vehicle transmitter and receiver- and an exteroceptive sensor, range finder, to sense their surroundings for more accurate and reliable collaborative localization.",
"title": ""
},
{
"docid": "9f76d132f413bb5dd8d650ae88752b40",
"text": "This paper presents a localization system for objects tagged with UHF-RFID passive tags, where the reader is attached to a drone. The system implements the phase-based SARFID technique to locate static tags with respect to a UHF-RFID reader attached to a commercial drone. The reader antenna trajectory is achieved through a Global Positioning System. The bi-dimensional tag position is estimated with centimeter order accuracy. Only one reader antenna is required, without any reference tag.",
"title": ""
},
{
"docid": "a7bd8b02d7a46e6b96223122f673a222",
"text": "This study was conducted to identify the risk factors that are associated with neonatal mortality in lambs and kids in Jordan. The bacterial causes of mortality in lambs and kids were investigated. One hundred sheep and goat flocks were selected randomly from different areas of North Jordan at the beginning of the lambing season. The flocks were visited every other week to collect information and to take samples from freshly dead animals. By the end of the lambing season, flocks that had neonatal mortality rate ≥ 1.0% were considered as “case group” while flocks that had neonatal mortality rate less than 1.0% − as “control group”. The results indicated that neonatal mortality rate (within 4 weeks of age), in lambs and kids, was 3.2%. However, the early neonatal mortality rate (within 48 hours of age) was 2.01% and represented 62.1% of the neonatal mortalities. The following risk factors were found to be associated with the neonatal mortality in lambs and kids: not separating the neonates from adult animals; not vaccinating dams against infectious diseases (pasteurellosis, colibacillosis and enterotoxemia); walking more than 5 km and starvation-mismothering exposure. The causes of neonatal mortality in lambs and kids were: diarrhea (59.75%), respiratory diseases (13.3%), unknown causes (12.34%), and accident (8.39%). Bacteria responsible for neonatal mortality were: Escherichia coli, Pasteurella multocida, Clostridium perfringens and Staphylococcus aureus. However, E. coli was the most frequent bacterial species identified as cause of neonatal mortality in lambs and kids and represented 63.4% of all bacterial isolates. The E. coli isolates belonged to 10 serogroups, the O44 and O26 being the most frequent isolates.",
"title": ""
},
{
"docid": "fc3aeb32f617f7a186d41d56b559a2aa",
"text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.",
"title": ""
},
{
"docid": "5e00131c72f013bbbc6ac5fb7ee1e50f",
"text": "We propose a probabilistic video model, the Video Pixel Network (VPN), that estimates the discrete joint distribution of the raw pixel values in a video. The model and the neural architecture reflect the time, space and color structure of video tensors and encode it as a fourdimensional dependency chain. The VPN approaches the best possible performance on the Moving MNIST benchmark, a leap over the previous state of the art, and the generated videos show only minor deviations from the ground truth. The VPN also produces detailed samples on the action-conditional Robotic Pushing benchmark and generalizes to the motion of novel objects.",
"title": ""
},
{
"docid": "acf5aa5b8e5cd2b28f71522cabe45c26",
"text": "Now a days, there are a number of techniques which are purposefully used and are being build up for well management of garbage or solid waste . Zigbee and Global System for Mobile Communication (GSM) are the latest trends and are one of the best combination to be used in the project. Hence,a combination of both of these technologies is used in the project . To give a brief description of the project , the sensors are placed in the common garbage bins placed at the public places. When the garbage reaches the level of the sensor, then that indication will be given to ARM 7 Controller. The controller will give indication to the driver of garbage collection truck as to which garbage bin is completely filled and needs urgent attention. ARM 7 will give indication by sending SMS using GSM technology.",
"title": ""
},
{
"docid": "22654d2ed4c921c7bceb22ce9f9dc892",
"text": "xv",
"title": ""
},
{
"docid": "746c1feda23b8d685e9908001d8df0ab",
"text": "Breast cancer is one of the leading causes of cancer death among women worldwide. The proposed approach comprises three steps as follows. Firstly, the image is preprocessed to remove speckle noise while preserving important features of the image. Three methods are investigated, i.e., Frost Filter, Detail Preserving Anisotropic Diffusion, and Probabilistic Patch-Based Filter. Secondly, Normalized Cut or Quick Shift is used to provide an initial segmentation map for breast lesions. Thirdly, a postprocessing step is proposed to select the correct region from a set of candidate regions. This approach is implemented on a dataset containing 20 B-mode ultrasound images, acquired from UDIAT Diagnostic Center of Sabadell, Spain. The overall system performance is determined against the ground truth images. The best system performance is achieved through the following combinations: Frost Filter with Quick Shift, Detail Preserving Anisotropic Diffusion with Normalized Cut and Probabilistic Patch-Based with Normalized Cut.",
"title": ""
},
{
"docid": "b81b88841b781e67894ec067ae094b00",
"text": "The testing effect, or the finding that taking an initial test improves subsequent memory performance, is a robust and reliable phenomenon--as long as the final test involves recall. Few studies have examined the effects of taking an initial recall test on final recognition performance, and results from these studies are equivocal. In 3 experiments, we attempt to demonstrate that initial testing can change the ways in which later recognition decisions are executed even when no difference can be detected in the recognition hit rates. Specifically, initial testing was shown to enhance later recollection but leave familiarity unchanged. This conclusion emerged from three dependent measures: source memory, exclusion performance, and remember/know judgments.",
"title": ""
},
{
"docid": "55e977381cf25444be499ec0c320cef9",
"text": "Embedding network data into a low-dimensional vector space has shown promising performance for many real-world applications, such as node classification and entity retrieval. However, most existing methods focused only on leveraging network structure. For social networks, besides the network structure, there also exists rich information about social actors, such as user profiles of friendship networks and textual content of citation networks. These rich attribute information of social actors reveal the homophily effect, exerting huge impacts on the formation of social networks. In this paper, we explore the rich evidence source of attributes in social networks to improve network embedding. We propose a generic Attributed Social Network Embedding framework (ASNE), which learns representations for social actors (i.e., nodes) by preserving both the structural proximity and attribute proximity. While the structural proximity captures the global network structure, the attribute proximity accounts for the homophily effect. To justify our proposal, we conduct extensive experiments on four real-world social networks. Compared to the state-of-the-art network embedding approaches, ASNE can learn more informative representations, achieving substantial gains on the tasks of link prediction and node classification. Specifically, ASNE significantly outperforms node2vec with an 8.2 percent relative improvement on the link prediction task, and a 12.7 percent gain on the node classification task.",
"title": ""
},
{
"docid": "32373f4f2852531c02026ffe35dd8729",
"text": "VSL#3 probiotics can be effective on induction and maintenance of the remission of clinical ulcerative colitis. However, the mechanisms are not fully understood. The aim of this study was to examine the effects of VSL#3 probiotics on dextran sulfate sodium (DSS)-induced colitis in rats. Acute colitis was induced by administration of DSS 3.5 % for 7 days in rats. Rats in two groups were treated with either 15 mg VSL#3 or placebo via gastric tube once daily after induction of colitis; rats in other two groups were treated with either the wortmannin (1 mg/kg) via intraperitoneal injection or the wortmannin + VSL#3 after induction of colitis. Anti-inflammatory activity was assessed by myeloperoxidase (MPO) activity. Expression of inflammatory related mediators (iNOS, COX-2, NF-κB, Akt, and p-Akt) and cytokines (TNF-α, IL-6, and IL-10) in colonic tissue were assessed. TNF-α, IL-6, and IL-10 serum levels were also measured. Our results demonstrated that VSL#3 and wortmannin have anti-inflammatory properties by the reduced disease activity index and MPO activity. In addition, administration of VSL#3 and wortmannin for 7 days resulted in a decrease of iNOS, COX-2, NF-κB, TNF-α, IL-6, and p-Akt and an increase of IL-10 expression in colonic tissue. At the same time, administration of VSL#3 and wortmannin resulted in a decrease of TNF-α and IL-6 and an increase of IL-10 serum levels. VSL#3 probiotics therapy exerts the anti-inflammatory activity in rat model of DSS-induced colitis by inhibiting PI3K/Akt and NF-κB pathway.",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
},
{
"docid": "089573eaa8c1ad8c7ad244a8ccca4049",
"text": "We consider the problem of assigning an input vector to one of m classes by predicting P(c|x) for c = 1, o, m. For a twoclass problem, the probability of class one given x is estimated by s(y(x)), where s(y) = 1/(1 + ey ). A Gaussian process prior is placed on y(x), and is combined with the training data to obtain predictions for new x points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace’s approximation. The method is generalized to multiclass problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets.",
"title": ""
},
{
"docid": "103f432e237567c2954490e8ef257fe7",
"text": "Pierre Bourdieu holds the Chair in Sociology at the prestigious College de France, Paris. He is Directeur d'Etudes at l'Ecole des Hautes Etudes en Sciences Sociales, where he is also Director of the Center for European Sociology, and Editor of the influential journal Actes de la recherche en sciences sociales. Professor Bourdieu is the author or coauthor of approximately twenty books. A number of these have been published in English translation: The Algerians, 1962; Reproduction in Education, Society and Culture (with Jean-Claude Passeron), 1977; Outline of a Theory of Practice, 1977; Algeria I960, 1979; The Inheritors: French Students and their Relations to Culture, 1979; Distinction: A Social Critique of the Judgment of Taste, 1984. The essay below analyzes what Bourdieu terms the \"juridical field.\" In Bourdieu's conception, a \"field\" is an area of structured, socially patterned activity or \"practice,\" in this case disciplinarily and professionally defined. The \"field\" and its \"practices\" have special senses in",
"title": ""
}
] |
scidocsrr
|
aa2a216e2ccc2390b042fbb4895a5645
|
Static Analysis of Android Programs
|
[
{
"docid": "a8cb644c1a7670670299d33c1e1e53d3",
"text": "In Java, C or C++, attempts to dereference the null value result in an exception or a segmentation fault. Hence, it is important to identify those program points where this undesired behaviour might occur or prove the other program points (and possibly the entire program) safe. To that purpose, null-pointer analysis of computer programs checks or infers non-null annotations for variables and object fields. With few notable exceptions, null-pointer analyses currently use run-time checks or are incorrect or only verify manually provided annotations. In this paper, we use abstract interpretation to build and prove correct a first, flow and context-sensitive static null-pointer analysis for Java bytecode (and hence Java) which infers non-null annotations. It is based on Boolean formulas, implemented with binary decision diagrams. For better precision, it identifies instance or static fields that remain always non-null after being initialised. Our experiments show this analysis faster and more precise than the correct null-pointer analysis by Hubert, Jensen and Pichardie. Moreover, our analysis deals with exceptions, which is not the case of most others; its formulation is theoretically clean and its implementation strong and scalable. We subsequently improve that analysis by using local reasoning about fields that are not always non-null, but happen to hold a non-null value when they are accessed. This is a frequent situation, since programmers typically check a field for non-nullness before its access. We conclude with an example of use of our analyses to infer null-pointer annotations which are more precise than those that other inference tools can achieve.",
"title": ""
}
] |
[
{
"docid": "6c12755ba2580d5d9b794b9a33c0304a",
"text": "A fundamental part of conducting cross-disciplinary web science research is having useful, high-quality datasets that provide value to studies across disciplines. In this paper, we introduce a large, hand-coded corpus of online harassment data. A team of researchers collaboratively developed a codebook using grounded theory and labeled 35,000 tweets. Our resulting dataset has roughly 15% positive harassment examples and 85% negative examples. This data is useful for training machine learning models, identifying textual and linguistic features of online harassment, and for studying the nature of harassing comments and the culture of trolling.",
"title": ""
},
{
"docid": "6677149025a415e44778d1011b617c36",
"text": "In this paper controller synthesis based on standard and dynamic sliding modes for an uncertain nonlinear MIMO Three tank System is presented. Two types of sliding mode controllers are synthesized; first controller is based on standard first order sliding modes while second controller uses dynamic sliding modes. Sliding manifolds for both controllers are designed in-order to ensure finite time convergence of sliding variable for tracking the desired system trajectories. Simulation results are presented showing the performance analysis of both sliding mode controllers. Simulations are also carried out to assess the performance of dynamic sliding mode controller against parametric uncertainties / disturbances. A comparison of designed sliding mode controllers with LMI based robust H∞ controller is also discussed. The performance of dynamic sliding mode control in terms of response time, control effort and robustness of dynamic sliding mode controller is shown to be better than standard sliding mode controller and H∞ controllers.",
"title": ""
},
{
"docid": "b18ecc94c1f42567b181c49090b03d8a",
"text": "We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject’s potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "ced0fc1355a25aba36288d7c0a830240",
"text": "Working memory acts as a key bridge between perception, long-term memory, and action. The brain regions, connections, and neurotransmitters that underlie working memory undergo dramatic plastic changes during the life span, and in response to injury. Early life reliance on deep gray matter structures fades during adolescence as increasing reliance on prefrontal and parietal cortex accompanies the development of executive aspects of working memory. The rise and fall of working memory capacity and executive functions parallels the development and loss of neurotransmitter function in frontal cortical areas. Of the affected neurotransmitters, dopamine and acetylcholine modulate excitatory-inhibitory circuits that underlie working memory, are important for plasticity in the system, and are affected following preterm birth and adult brain injury. Pharmacological interventions to promote recovery of working memory abilities have had limited success, but hold promise if used in combination with behavioral training and brain stimulation. The intense study of working memory in a range of species, ages and following injuries has led to better understanding of the intrinsic plasticity mechanisms in the working memory system. The challenge now is to guide these mechanisms to better improve or restore working memory function.",
"title": ""
},
{
"docid": "3e44a5c966afbeabff11b54bafcefdce",
"text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.",
"title": ""
},
{
"docid": "f5f70dca677752bcaa39db59988c088e",
"text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children",
"title": ""
},
{
"docid": "b74818aca22974927fdcdcbf60ce239b",
"text": "We are currently observing a significant increase in the popularity of Unmanned Aerial Vehicles (UAVs), popularly also known by their generic term drones. This is not only the case for recreational UAVs, that one can acquire for a few hundred dollars, but also for more sophisticated ones, namely professional UAVs, whereby the cost can reach several thousands of dollars. These professional UAVs are known to be largely employed in sensitive missions such as monitoring of critical infrastructures and operations by the police force. Given these applications, and in contrast to what we have been seeing for the case of recreational UAVs, one might assume that professional UAVs are strongly resilient to security threats. In this demo we prove such an assumption wrong by presenting the security gaps of a professional UAV, which is used for critical operations by police forces around the world. We demonstrate how one can exploit the identified security vulnerabilities, perform a Man-in-the-Middle attack, and inject control commands to interact with the compromised UAV. In addition, we discuss appropriate countermeasures to help improving the security and resilience of professional UAVs.",
"title": ""
},
{
"docid": "a39f11e64ba8347b212b7e34fa434f32",
"text": "This paper proposes a fully distributed multiagent-based reinforcement learning method for optimal reactive power dispatch. According to the method, two agents communicate with each other only if their corresponding buses are electrically coupled. The global rewards that are required for learning are obtained with a consensus-based global information discovery algorithm, which has been demonstrated to be efficient and reliable. Based on the discovered global rewards, a distributed Q-learning algorithm is implemented to minimize the active power loss while satisfying operational constraints. The proposed method does not require accurate system model and can learn from scratch. Simulation studies with power systems of different sizes show that the method is very computationally efficient and able to provide near-optimal solutions. It can be observed that prior knowledge can significantly speed up the learning process and decrease the occurrences of undesirable disturbances. The proposed method has good potential for online implementation.",
"title": ""
},
{
"docid": "e9aea5919d3d38184fc13c10f1751293",
"text": "The distinct protein aggregates that are found in Alzheimer's, Parkinson's, Huntington's and prion diseases seem to cause these disorders. Small intermediates — soluble oligomers — in the aggregation process can confer synaptic dysfunction, whereas large, insoluble deposits might function as reservoirs of the bioactive oligomers. These emerging concepts are exemplified by Alzheimer's disease, in which amyloid β-protein oligomers adversely affect synaptic structure and plasticity. Findings in other neurodegenerative diseases indicate that a broadly similar process of neuronal dysfunction is induced by diffusible oligomers of misfolded proteins.",
"title": ""
},
{
"docid": "a98fbce4061085dda4d1cf4648d04f08",
"text": "We estimate mate preferences using a novel data set from an online dating service. The data set contains detailed information on user attributes and the decision to contact a potential mate after viewing his or her profile. This decision provides the basis for our preference estimation approach. A potential problem arises if the site users strategically shade their true preferences. We provide a simple test and a bias correction method for strategic behavior. The main findings are (i) There is no evidence for strategic behavior. (ii) Men and women have a strong preference for similarity along many (but not all) attributes. (iii) In particular, the site users display strong same-race preferences. Race preferences do not differ across users with different age, income, or education levels in the case of women, and differ only slightly in the case of men. For men, but not for women, the revealed same-race preferences correspond to the same-race preference stated in the users’ profile. (iv) There are gender differences in mate preferences; in particular, women have a stronger preference than men for income over physical attributes. ∗Note that previous versions of this paper (“What Makes You Click? – Mate Preferences and Matching Outcomes in Online Dating”) were circulated between 2004 and 2006. Any previously reported results not contained in this paper or in the companion piece Hitsch et al. (2010) did not prove to be robust and were dropped from the final paper versions. We thank Babur De los Santos, Chris Olivola, Tim Miller, and David Wood for their excellent research assistance. We are grateful to Elizabeth Bruch, Jean-Pierre Dubé, Eli Finkel, Emir Kamenica, Derek Neal, Peter Rossi, Betsey Stevenson, and Utku Ünver for comments and suggestions. Seminar participants at the 2006 AEA meetings, Boston College, the Caltech 2008 Matching Conference, the Choice Symposium in Estes Park, the Conference on Marriage and Matching at New York University 2006, the ELSE Laboratory Experiments and the Field (LEaF) Conference, Northwestern University, the 2007 SESP Preconference in Chicago, SITE 2007, the University of Pennsylvania, the 2004 QME Conference, UC Berkeley, UCLA, the University of Chicago, UCL, the University of Naples Federico II, the University of Toronto, Stanford GSB, and Yale University provided valuable comments. This research was supported by the Kilts Center of Marketing (Hitsch), a John M. Olin Junior Faculty Fellowship, and the National Science Foundation, SES-0449625 (Hortaçsu). Please address all correspondence to Hitsch (guenter.hitsch@chicagobooth.edu), Hortaçsu (hortacsu@uchicago.edu), or Ariely (dandan@duke.edu).",
"title": ""
},
{
"docid": "af928cd35b6b33ce1cddbf566f63e607",
"text": "Machine Learning has been the quintessential solution for many AI problems, but learning is still heavily dependent on the specific training data. Some learning models can be incorporated with a prior knowledge in the Bayesian set up, but these learning models do not have the ability to access any organised world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task. Using this method we show significant improvement in performance for text classification with News20, DBPedia datasets and natural language inference with Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained well with substantially less amount of labeled training data, when it has access to organised world knowledge in the form of knowledge graph.",
"title": ""
},
{
"docid": "da9b9a32db674e5f6366f6b9e2c4ee10",
"text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.",
"title": ""
},
{
"docid": "5759152f6e9a9cb1e6c72857e5b3ec54",
"text": "Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.",
"title": ""
},
{
"docid": "4b570eb16d263b2df0a8703e9135f49c",
"text": "ions. They also presume that consumers carefully calculate the give and get components of value, an assumption that did not hold true for most consumers in the exploratory study. Price as a Quality Indicator Most experimental studies related to quality have focused on price as the key extrinsic quality signal. As suggested in the propositions, price is but one of several potentially useful extrinsic cues; brand name or package may be equally or more important, especially in packaged goods. Further, evidence of a generalized price-perceived quality relationship is inconclusive. Quality research may benefit from a de-emphasis on price as the main extrinsic quality indicator. Inclusion of other important indicators, as well as identification of situations in which each of those indicators is important, may provide more interesting and useful answers about the extrinsic signals consumers use. Management Implications An understanding of what quality and value mean to consumers offers the promise of improving brand positions through more precise market analysis and segmentation, product planning, promotion, and pricing strategy. The model presented here suggests the following strategies that can be implemented to understand and capitalize on brand quality and value. Close the Quality Perception Gap Though managers increasingly acknowledge the importance of quality, many continue to define and measure it from the company's perspective. Closing the gap between objective and perceived quality requires that the company view quality the way the consumer does. Research that investigates which cues are important and how consumers form impressions of qualConsumer Perceptions of Price, Quality, and Value / 17 ity based on those technical, objective cues is necessary. Companies also may benefit from research that identifies the abstract dimensions of quality desired by consumers in a product class. Identify Key Intrinsic and Extrinsic Attribute",
"title": ""
},
{
"docid": "3571e2646d76d5f550075952cb75ba30",
"text": "Traditional simultaneous localization and mapping (SLAM) algorithms have been used to great effect in flat, indoor environments such as corridors and offices. We demonstrate that with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments. In particular, we use data acquired with a 3D laser range finder. We use a simple segmentation algorithm to separate the data stream into distinct point clouds, each referenced to a vehicle position. The SLAM technique we then adopt inherits much from 2D delayed state (or scan-matching) SLAM in that the state vector is an ever growing stack of past vehicle positions and inter-scan registrations are used to form measurements between them. The registration algorithm used is a novel combination of previous techniques carefully balancing the need for maximally wide convergence basins, robustness and speed. In addition, we introduce a novel post-registration classification technique to detect matches which have converged to incorrect local minima",
"title": ""
},
{
"docid": "74af5749afb36c63dbf38bb8118807c9",
"text": "Modern mobile platforms like Android enable applications to read aggregate power usage on the phone. This information is considered harmless and reading it requires no user permission or notification. We show that by simply reading the phone’s aggregate power consumption over a period of a few minutes an application can learn information about the user’s location. Aggregate phone power consumption data is extremely noisy due to the multitude of components and applications that simultaneously consume power. Nevertheless, by using machine learning algorithms we are able to successfully infer the phone’s location. We discuss several ways in which this privacy leak can be remedied.",
"title": ""
},
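The passage above reports inferring a phone's location from a few minutes of aggregate power readings using machine learning. The snippet below is only a minimal sketch of that idea, not the authors' pipeline: the synthetic traces, route labels, window count and the choice of a random-forest classifier are all placeholder assumptions.

```python
# Hypothetical sketch: classify coarse routes from aggregate power traces.
# `traces` stands in for (n_samples, n_timesteps) power readings in mW
# sampled over a few minutes; `routes` holds a coarse route label per trace.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def trace_features(trace, n_windows=30):
    """Summarise a noisy power trace with simple per-window statistics."""
    windows = np.array_split(trace, n_windows)
    return np.concatenate([[w.mean(), w.std(), w.max()] for w in windows])

rng = np.random.default_rng(0)
traces = rng.normal(1000.0, 150.0, size=(200, 3000))   # placeholder data
routes = rng.integers(0, 4, size=200)                   # placeholder route ids

X = np.vstack([trace_features(t) for t in traces])
X_tr, X_te, y_tr, y_te = train_test_split(X, routes, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("route accuracy:", clf.score(X_te, y_te))
```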
{
"docid": "0a7f93e98e1d256ea6a4400f33753d6a",
"text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensorplacement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the nextbest-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The International Journal of Robotics Research Vol. 21, No. 10–11, October-November 2002, pp. 829-848, ©2002 Sage Publications The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs. KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building",
"title": ""
},
{
"docid": "3192a76e421d37fbe8619a3bc01fb244",
"text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)",
"title": ""
},
{
"docid": "4b30695ba1989cb6770a38afca685aaa",
"text": "Prior literature on search advertising primarily assumes that search engines know advertisers’ click-through rates, the probability that a consumer clicks on an advertiser’s ad. This information, however, is not available when a new advertiser starts search advertising for the first time. In particular, a new advertiser’s click-through rate can be learned only if the advertiser’s ad is shown to enough consumers, i.e., the advertiser wins enough auctions. Since search engines use advertisers’ expected click-through rates when calculating payments and allocations, the lack of information about a new advertiser can affect new and existing advertisers’ bidding strategies. In this paper, we use a game theory model to analyze advertisers’ strategies, their payoffs, and the search engine’s revenue when a new advertiser joins the market. Our results indicate that a new advertiser should always bid higher (sometimes above its valuation) when it starts search advertising. However, the strategy of an existing advertiser, i.e., an incumbent, depends on its valuation and click-through rate. A strong incumbent increases its bid to prevent the search engine from learning the new advertiser’s clickthrough rate, whereas a weak incumbent decreases its bid to facilitate the learning process. Interestingly, we find that, under certain conditions, the search engine benefits from not knowing the new advertiser’s click-through rate because its ignorance could induce the advertisers to bid more aggressively. Nonetheless, the search engine’s revenue sometimes decreases because of this lack of information, particularly, when the incumbent is sufficiently strong. We show that the search engine can mitigate this loss, and improve its total profit, by offering free advertising credit to new advertisers.",
"title": ""
}
] |
scidocsrr
|
c955843a47af728eb92fff84d62a4226
|
A Review of Mobile HCI Research Methods
|
[
{
"docid": "b2bcf059713aaa9802f9d8e7793106dd",
"text": "A framework is presented for analyzing most of the experimental work performed in software engineering over the past several years. The framework of experimentation consists of four categories corresponding to phases of the experimentation process: definition, planning, operation, and interpretation. A variety of experiments are described within the framework and their contribution to the software engineering discipline is discussed. Some recommendations for the application of the experimental process in software engineering are included.",
"title": ""
}
] |
[
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "28c1099dea57540a184af8d68cacd21d",
"text": "It is commonly held that implicit knowledge expresses itself as fluency. A perceptual clarification task was used to examine the relationship between perceptual processing fluency, subjective familiarity, and grammaticality judgments in a task frequently used to produce implicit knowledge, artificial grammar learning (AGL). Four experiments examined the effects of naturally occurring differences and manipulated differences in perceptual fluency, where decisions were based on a brief exposure to test-strings (during the clarification task only) or normal exposure. When perceptual fluency was not manipulated, it was weakly related to familiarity and grammaticality judgments, but unrelated to grammatical status and hence not a source of accuracy. Counterbalanced grammatical and ungrammatical strings did not differ in perceptual fluency but differed substantially in subjective familiarity. When fluency was manipulated, faster clarifying strings were rated as more familiar and were more often endorsed as grammatical but only where exposure was brief. Results indicate that subjective familiarity derived from a source other than perceptual fluency, is the primary basis for accuracy in AGL. Perceptual fluency is found to be a dumb heuristic influencing responding only in the absence of actual implicit knowledge.",
"title": ""
},
{
"docid": "3171587b5b4554d151694f41206bcb4e",
"text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.",
"title": ""
},
{
"docid": "18403ce2ebb83b9207a7cece82e91ffc",
"text": "Hate speech in the form of racism and sexism is commonplace on the internet (Waseem and Hovy, 2016). For this reason, there has been both an academic and an industry interest in detection of hate speech. The volume of data to be reviewed for creating data sets encourages a use of crowd sourcing for the annotation efforts. In this paper, we provide an examination of the influence of annotator knowledge of hate speech on classification models by comparing classification results obtained from training on expert and amateur annotations. We provide an evaluation on our own data set and run our models on the data set released by Waseem and Hovy (2016). We find that amateur annotators are more likely than expert annotators to label items as hate speech, and that systems trained on expert annotations outperform systems trained on amateur annotations.",
"title": ""
},
{
"docid": "211058f2d0d5b9cf555a6e301cd80a5d",
"text": "We present a method based on header paths for efficient and complete extraction of labeled data from tables meant for humans. Although many table configurations yield to the proposed syntactic analysis, some require access to semantic knowledge. Clicking on one or two critical cells per table, through a simple interface, is sufficient to resolve most of these problem tables. Header paths, a purely syntactic representation of visual tables, can be transformed (\"factored\") into existing representations of structured data such as category trees, relational tables, and RDF triples. From a random sample of 200 web tables from ten large statistical web sites, we generated 376 relational tables and 34,110 subject-predicate-object RDF triples.",
"title": ""
},
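The passage above describes factoring header paths into RDF-style triples. The snippet below is a loose, hypothetical illustration of that factoring; the path delimiter, the `to_triples` helper and the example table are inventions for illustration and do not follow the paper's notation.

```python
# Hypothetical sketch of "factoring" header paths into RDF-style triples.
# Each data cell is addressed by its row header path and column header path;
# the "|"-delimited path strings below are illustrative only.
table = {
    ("Population|Urban", "Year|2010"): 3200,
    ("Population|Urban", "Year|2011"): 3350,
    ("Population|Rural", "Year|2010"): 1800,
}

def to_triples(cells):
    triples = []
    for (row_path, col_path), value in cells.items():
        subject = row_path.split("|")[-1]          # e.g. "Urban"
        predicate = col_path.replace("|", "/")     # e.g. "Year/2010"
        triples.append((subject, predicate, value))
    return triples

for s, p, o in to_triples(table):
    print(s, p, o)
```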
{
"docid": "cf1beda3b3f03b59cefba4aecff92fe2",
"text": "Multi-modal data is becoming more common in big data background. Finding the semantically similar objects from different modality is one of the heart problems of multi-modal learning. Most of the current methods try to learn the intermodal correlation with extrinsic supervised information, while intrinsic structural information of each modality is neglected. The performance of these methods heavily depends on the richness of training samples. However, obtaining the multi-modal training samples is still a labor and cost intensive work. In this paper, we bring a extrinsic correlation between the space structures of each modalities in coreference resolution. With this correlation, a semisupervised learning model for multi-modal coreference resolution is proposed. We firstly extract high-level features of images and text, then compute the distances of each object from some reference points to build the space structure of each modality. With a shared reference point set, the space structures of each modality are correlated. We employ the correlation to build a commonly shared space that the semantic distance between multimodal objects can be computed directly. The experiments on two multi-modal datasets show that our model performs better than the existing methods with insufficient training data.",
"title": ""
},
{
"docid": "c00470d69400066d11374539052f4a86",
"text": "When individuals learn facts (e.g., foreign language vocabulary) over multiple study sessions, the temporal spacing of study has a significant impact on memory retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield lower cued-recall accuracy than intermediate intervals. Appropriate spacing of study can double retention on educationally relevant time scales. We introduce a Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM’s prediction is based on empirical data characterizing forgetting of the material following a single study session. MCM is a synthesis of two existing memory models (Staddon, Chelaru, & Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated and incompatible, but we show they share a core feature that allows them to be integrated. MCM can determine study schedules that maximize the durability of learning, and has implications for education and training. MCM can be cast either as a neural network with inputs that fluctuate over time, or as a cascade of leaky integrators. MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account for human declarative memory.",
"title": ""
},
{
"docid": "479f00e59bdc5744c818e29cdf446df3",
"text": "A new algorithm for Support Vector regression is described. For a priori chosen , it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.",
"title": ""
},
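Assuming the passage refers to the ν-parameterised support vector regression family, the same idea can be exercised with scikit-learn's `NuSVR`, where `nu` upper-bounds the fraction of training points allowed outside the tube. This usage sketch is illustrative, not the paper's original implementation, and the data and hyperparameters are placeholders.

```python
# Illustrative use of nu-parameterised support vector regression:
# `nu` upper-bounds the fraction of training points lying outside the tube.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

model = NuSVR(nu=0.2, C=10.0, kernel="rbf")   # nu chosen a priori
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```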
{
"docid": "5b0842894cbf994c3e63e521f7352241",
"text": "The burgeoning field of genomics has revived interest in multiple testing procedures by raising new methodological and computational challenges. For example, microarray experiments generate large multiplicity problems in which thousands of hypotheses are tested simultaneously. Westfall and Young (1993) propose resampling-based p-value adjustment procedures which are highly relevant to microarray experiments. This article discusses different criteria for error control in resampling-based multiple testing, including (a) the family wise error rate of Westfall and Young (1993) and (b) the false discovery rate developed by Benjamini and Hochberg (1995), both from a frequentist viewpoint; and (c) the positive false discovery rate of Storey (2002a), which has a Bayesian motivation. We also introduce our recently developed fast algorithm for implementing the minP adjustment to control family-wise error rate. Adjusted p-values for different approaches are applied to gene expression data from two recently published microarray studies. The properties of these procedures for multiple testing are compared.",
"title": ""
},
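As one concrete example of the false discovery rate control discussed above, the Benjamini-Hochberg step-up procedure can be written in a few lines; the p-values below are synthetic placeholders rather than microarray results.

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure for FDR control.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest index passing the test
        rejected[order[: k + 1]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.34, 0.81]
print(benjamini_hochberg(pvals, alpha=0.05))
```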
{
"docid": "5e135da54b6ba5e9005d61bd64bbd2c9",
"text": "A miniaturized Marchand balun combiner is proposed for a W-band power amplifier (PA). The proposed combiner reduces the electrical length of the transmission lines (transmission line) from about 80 <sup>°</sup> to 30 <sup>°</sup>, when compared with a conventional Marchand balun combiner. Implemented in a 1-V 65-nm CMOS process, the presented PA achieves a measured saturated output power of 11.9 dBm and a peak power-added efficiency of 9.0% at 87 GHz. The total chip area (with pads) is 0.77×0.48 mm<sup>2</sup>, where the size of the balun combiner is only 0.36×0.13 mm<sup>2</sup>.",
"title": ""
},
{
"docid": "2b1eda1c5a0bb050b82f5fa42893466b",
"text": "In recent years researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state of the art results in simplified closed-domain settings such as the SQuAD (Rajpurkar et al. 2016) dataset, which provides a preselected passage, from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., wikipedia) instead of a pre-selected passage (Chen et al. 2017a). This setting is more complex as it requires large-scale search for relevant passages by an information retrieval component, combined with a reading comprehension model that “reads” the passages to generate an answer to the question. Performance in this setting lags well behind closed-domain performance. In this paper, we present a novel open-domain QA system called Reinforced Ranker-Reader (R), based on two algorithmic innovations. First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of extracting the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker along with an answer-extraction Reader model, based on reinforcement learning. We report extensive experimental results showing that our method significantly improves on the state of the art for multiple open-domain QA datasets. 2",
"title": ""
},
{
"docid": "c86f477a1a2900a1b3d5dc80974c6f7c",
"text": "The understanding of the metal and transition metal dichalcogenide (TMD) interface is critical for future electronic device technologies based on this new class of two-dimensional semiconductors. Here, we investigate the initial growth of nanometer-thick Pd, Au, and Ag films on monolayer MoS2. Distinct growth morphologies are identified by atomic force microscopy: Pd forms a uniform contact, Au clusters into nanostructures, and Ag forms randomly distributed islands on MoS2. The formation of these different interfaces is elucidated by large-scale spin-polarized density functional theory calculations. Using Raman spectroscopy, we find that the interface homogeneity shows characteristic Raman shifts in E2g(1) and A1g modes. Interestingly, we show that insertion of graphene between metal and MoS2 can effectively decouple MoS2 from the perturbations imparted by metal contacts (e.g., strain), while maintaining an effective electronic coupling between metal contact and MoS2, suggesting that graphene can act as a conductive buffer layer in TMD electronics.",
"title": ""
},
{
"docid": "99cb4f69fb7b6ff16c9bffacd7a42f4d",
"text": "Single cell segmentation is critical and challenging in live cell imaging data analysis. Traditional image processing methods and tools require time-consuming and labor-intensive efforts of manually fine-tuning parameters. Slight variations of image setting may lead to poor segmentation results. Recent development of deep convolutional neural networks(CNN) provides a potentially efficient, general and robust method for segmentation. Most existing CNN-based methods treat segmentation as a pixel-wise classification problem. However, three unique problems of cell images adversely affect segmentation accuracy: lack of established training dataset, few pixels on cell boundaries, and ubiquitous blurry features. The problem becomes especially severe with densely packed cells, where a pixel-wise classification method tends to identify two neighboring cells with blurry shared boundary as one cell, leading to poor cell count accuracy and affecting subsequent analysis. Here we developed a different learning strategy that combines strengths of CNN and watershed algorithm. The method first trains a CNN to learn Euclidean distance transform of binary masks corresponding to the input images. Then another CNN is trained to detect individual cells in the Euclidean distance transform. In the third step, the watershed algorithm takes the outputs from the previous steps as inputs and performs the segmentation. We tested the combined method and various forms of the pixel-wise classification algorithm on segmenting fluorescence and transmitted light images. The new method achieves similar pixel accuracy but significant higher cell count accuracy than pixel-wise classification methods do, and the advantage is most obvious when applying on noisy images of densely packed cells.",
"title": ""
},
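The final watershed step described above can be sketched as follows. In the paper the distance map and the cell markers come from two trained CNNs; here both are stand-ins computed directly from a synthetic binary mask so the snippet stays self-contained.

```python
# Sketch of the watershed step on a distance map with detected cell markers.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

mask = np.zeros((64, 64), dtype=bool)
mask[10:30, 10:30] = True
mask[25:45, 25:45] = True                            # two touching "cells"

distance = ndimage.distance_transform_edt(mask)      # stand-in for CNN #1
peaks = peak_local_max(distance, min_distance=10, num_peaks=2)  # stand-in for CNN #2
markers = np.zeros(mask.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-distance, markers, mask=mask)
print("cells found:", labels.max())
```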
{
"docid": "6d3410de121ffe037eafd5f30daa7252",
"text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.",
"title": ""
},
{
"docid": "cbed500143b9d37049329b4f26f4833e",
"text": "In this paper, we study the problem of robust feature extraction based on l2,1 regularized correntropy in both theoretical and algorithmic manner. In theoretical part, we point out that an l2,1-norm minimization can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates convergence study and algorithmic development. In particular, a general formulation is accordingly proposed to unify l1-norm and l2,1-norm minimization within a common framework. In algorithmic part, we propose an l2,1 regularized correntropy algorithm to extract informative features meanwhile to remove outliers from training data. A new alternate minimization algorithm is also developed to optimize the non-convex correntropy objective. In terms of face recognition, we apply the proposed method to obtain an appearance-based model, called Sparse-Fisherfaces. Extensive experiments show that our method can select robust and sparse features, and outperforms several state-of-the-art subspace methods on largescale and open face recognition datasets.",
"title": ""
},
{
"docid": "8a7f4cde54d120aab50c9d4f45e67a43",
"text": "The purpose of this study was to assess the perceived discomfort of patrol officers related to equipment and vehicle design and whether there were discomfort differences between day and night shifts. A total of 16 participants were recruited (10 males, 6 females) from a local police force to participate for one full day shift and one full night shift. A series of questionnaires were administered to acquire information regarding comfort with specific car features and occupational gear, body part discomfort and health and lifestyle. The discomfort questionnaires were administered three times during each shift to monitor discomfort progression within a shift. Although there were no significant discomfort differences reported between the day and night shifts, perceived discomfort was identified for specific equipment, vehicle design and vehicle configuration, within each 12-h shift.",
"title": ""
},
{
"docid": "9e91f7e57e074ec49879598c13035d70",
"text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.",
"title": ""
},
{
"docid": "4211e323e2efac1a08d8caae607f737d",
"text": "Mean reversion is a feature largely recognized for the price processes of many financial securities and especially commodities. In the literature there are examples where some simple speculative strategies, before transaction costs, were devised to earn excess returns from such price processes. Actually, the gain opportunities of mean reversion must be corrected to account for transaction costs, which may represent a major issue. In this work we try to determine sufficient conditions for the parameters of a mean reverting price process as a function of transaction costs, to allow a speculative trader to have positive expectations when deciding to take a position. We estimate the mean reverting parameters for some commodities and correct them for transaction costs to assess whether the potential inefficiency is actually relevant for speculative purposes. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7bdc8d864e370f96475dc7d5078b053c",
"text": "Nowadays, there is a trend to design complex, yet secure systems. In this context, the Trusted Execution Environment (TEE) was designed to enrich the previously defined trusted platforms. TEE is commonly known as an isolated processing environment in which applications can be securely executed irrespective of the rest of the system. However, TEE still lacks a precise definition as well as representative building blocks that systematize its design. Existing definitions of TEE are largely inconsistent and unspecific, which leads to confusion in the use of the term and its differentiation from related concepts, such as secure execution environment (SEE). In this paper, we propose a precise definition of TEE and analyze its core properties. Furthermore, we discuss important concepts related to TEE, such as trust and formal verification. We give a short survey on the existing academic and industrial ARM TrustZone-based TEE, and compare them using our proposed definition. Finally, we discuss some known attacks on deployed TEE as well as its wide use to guarantee security in diverse applications.",
"title": ""
}
] |
scidocsrr
|
05c15ef1162c588203dbf7ce40b96316
|
Image Based Mango Fruit Detection, Localisation and Yield Estimation Using Multiple View Geometry
|
[
{
"docid": "0a58548ceecaa13e1c77a96b4d4685c4",
"text": "Ground vehicles equipped with monocular vision systems are a valuable source of high resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data. A general purpose image segmentation approach is used, including two feature learning algorithms; multi-scale Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These networks were extended by including contextual information about how the image data was captured (metadata), which correlates with some of the appearance variations and/or class distributions observed in the data. The pixel-wise fruit segmentation output is processed using the Watershed Segmentation (WS) and Circular Hough Transform (CHT) algorithms to detect and count individual fruits. Experiments were conducted in a commercial apple orchard near Melbourne, Australia. The results show an improvement in fruit segmentation performance with the inclusion of metadata on the previously benchmarked MLP network. We extend this work with CNNs, bringing agrovision closer to the state-of-the-art in computer vision, where although metadata had negligible influence, the best pixel-wise F1-score of 0.791 was achieved. The WS algorithm produced the best apple detection and counting results, with a detection F1-score of 0.858. As a final step, image fruit counts were accumulated over multiple rows at the orchard and compared against the post-harvest fruit counts that were obtained from a grading and counting machine. The count estimates using CNN and WS resulted in the best performance for this dataset, with a squared correlation coefficient of r = 0.826.",
"title": ""
},
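The circular Hough transform (CHT) counting step mentioned above can be sketched on a binary fruit mask as below; the synthetic mask, radii and accumulator thresholds are placeholder values, not the parameters tuned in the paper.

```python
# Sketch of circular Hough transform (CHT) counting on a fruit segmentation mask.
import cv2
import numpy as np

mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (60, 60), 20, 255, -1)       # two synthetic "apples"
cv2.circle(mask, (130, 140), 22, 255, -1)

blurred = cv2.GaussianBlur(mask, (9, 9), 2)
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
    param1=50, param2=15, minRadius=10, maxRadius=40,
)
count = 0 if circles is None else circles.shape[1]
print("fruit count estimate:", count)
```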
{
"docid": "1e59845e5a3f5e84fd63eddbc135aa4c",
"text": "This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work with the F1 score, which takes into account both precision and recall performances improving from 0 . 807 to 0 . 838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit.",
"title": ""
},
{
"docid": "6ba73f29a71cda57450f1838ef012356",
"text": "Addressing the challenges of feeding the burgeoning world population with limited resources requires innovation in sustainable, efficient farming. The practice of precision agriculture offers many benefits towards addressing these challenges, such as improved yield and efficient use of such resources as water, fertilizer and pesticides. We describe the design and development of a light-weight, multi-spectral 3D imaging device that can be used for automated monitoring in precision agriculture. The sensor suite consists of a laser range scanner, multi-spectral cameras, a thermal imaging camera, and navigational sensors. We present techniques to extract four key data products - plant morphology, canopy volume, leaf area index, and fruit counts - using the sensor suite. We demonstrate its use with two systems: multi-rotor micro aerial vehicles and on a human-carried, shoulder-mounted harness. We show results of field experiments conducted in collaboration with growers and agronomists in vineyards, apple orchards and orange groves.",
"title": ""
}
] |
[
{
"docid": "b403f37f0c27d4fe2b0f398c4c72f7a6",
"text": "In this work we present a novel approach to predict the function of proteins in protein-protein interaction (PPI) networks. We classify existing approaches into inductive and transductive approaches, and into local and global approaches. As of yet, among the group of inductive approaches, only local ones have been proposed for protein function prediction. We here introduce a protein description formalism that also includes global information, namely information that locates a protein relative to specific important proteins in the network. We analyze the effect on function prediction accuracy of selecting a different number of important proteins. With around 70 important proteins, even in large graphs, our method makes good and stable predictions. Furthermore, we investigate whether our method also classifies proteins accurately on more detailed function levels. We examined up to five different function levels. The method is benchmarked on four datasets where we found classification performance according to F-measure values indeed improves by 9 percent over the benchmark methods employed.",
"title": ""
},
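The global-descriptor idea above, describing each protein by its distances to a set of important reference proteins, might be sketched as follows; the graph, the reference nodes and the function labels are synthetic placeholders, and the choice of an SVM classifier is an assumption.

```python
# Sketch: describe each protein by its shortest-path distances to reference
# ("important") proteins in the PPI graph, then train an inductive classifier.
import networkx as nx
import numpy as np
from sklearn.svm import SVC

graph = nx.connected_watts_strogatz_graph(300, k=6, p=0.1, seed=0)
references = [0, 50, 100, 150, 200]            # placeholder "important" proteins

def descriptor(node):
    lengths = [nx.shortest_path_length(graph, node, r) for r in references]
    return np.array(lengths, dtype=float)

X = np.vstack([descriptor(n) for n in graph.nodes])
y = (X[:, 0] < X[:, 1]).astype(int)            # placeholder function labels

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```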
{
"docid": "5b4def6b0a13152578198b41da0cdecf",
"text": "For autonomous vehicles, the ability to detect and localize surrounding vehicles is critical. It is fundamental for further processing steps like collision avoidance or path planning. This paper introduces a convolutional neural network- based vehicle detection and localization method using point cloud data acquired by a LIDAR sensor. Acquired point clouds are transformed into bird's eye view elevation images, where each pixel represents a grid cell of the horizontal x-y plane. We intentionally encode each pixel using three channels, namely the maximal, median and minimal height value of all points within the respective grid. A major advantage of this three channel representation is that it allows us to utilize common RGB image-based detection networks without modification. The bird's eye view elevation images are processed by a two stage detector. Due to the nature of the bird's eye view, each pixel of the image represent ground coordinates, meaning that the bounding box of detected vehicles correspond directly to the horizontal position of the vehicles. Therefore, in contrast to RGB-based detectors, we not just detect the vehicles, but simultaneously localize them in ground coordinates. To evaluate the accuracy of our method and the usefulness for further high-level applications like path planning, we evaluate the detection results based on the localization error in ground coordinates. Our proposed method achieves an average precision of 87.9% for an intersection over union (IoU) value of 0.5. In addition, 75% of the detected cars are localized with an absolute positioning error of below 0.2m.",
"title": ""
},
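The bird's eye view encoding described above can be sketched directly: each grid cell of the x-y plane becomes a pixel whose three channels hold the maximum, median and minimum height of the points falling into that cell. The grid extent and resolution below are placeholder values.

```python
# Sketch of the three-channel bird's eye view elevation image described above.
import numpy as np
from collections import defaultdict

def point_cloud_to_bev(points, x_range=(0, 40), y_range=(-20, 20), cell=0.2):
    """points: (N, 3) array of x, y, z coordinates in metres."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    heights = defaultdict(list)
    for x, y, z in points:
        i = int((x - x_range[0]) / cell)
        j = int((y - y_range[0]) / cell)
        if 0 <= i < nx and 0 <= j < ny:
            heights[(i, j)].append(z)
    bev = np.zeros((nx, ny, 3), dtype=np.float32)
    for (i, j), zs in heights.items():
        bev[i, j] = (max(zs), float(np.median(zs)), min(zs))
    return bev

cloud = np.random.default_rng(0).uniform([0, -20, -2], [40, 20, 2], size=(5000, 3))
print(point_cloud_to_bev(cloud).shape)   # (200, 200, 3)
```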
{
"docid": "4eaf40cdef12d0d2be1d3c6a96c94841",
"text": "Acknowledgements in research publications, like citations, indicate influential contributions to scientific work; however, large-scale acknowledgement analyses have traditionally been impractical due to the high cost of manual information extraction. In this paper we describe a mixture method for automatically mining acknowledgements from research documents using a combination of a Support Vector Machine and regular expressions. The algorithm has been implemented as a plug-in to the CiteSeer Digital Library and the extraction results have been integrated with the traditional metadata and citation index of the CiteSeer system. As a demonstration, we use CiteSeer's autonomous citation indexing (ACI) feature to measure the relative impact of acknowledged entities, and present the top twenty acknowledged entities within the archive.",
"title": ""
},
{
"docid": "ea596b23af4b34fdb6a9986a03730d99",
"text": "In the past few years, recommender systems and semantic web technologies have become main subjects of interest in the research community. In this paper, we present a domain independent semantic similarity measure that can be used in the recommendation process. This semantic similarity is based on the relations between the individuals of an ontology. The assessment can be done offline which allows time to be saved and then, get real-time recommendations. The measure has been experimented on two different domains: movies and research papers. Moreover, the generated recommendations by the semantic similarity have been evaluated by a set of volunteers and the results have been promising.",
"title": ""
},
{
"docid": "b9226935d2802a7b9a23ce159190f525",
"text": "Accurate diagnosis is crucial for successful treatment of the brain tumor. Accordingly in this paper, we propose an intelligent content-based image retrieval (CBIR) system which retrieves similar pathology bearing magnetic resonance (MR) images of the brain from a medical database to assist the radiologist in the diagnosis of the brain tumor. A single feature vector will not perform well for finding similar images in the medical domain as images within the same disease class differ by severity, density and other such factors. To handle this problem, the proposed CBIR system uses a two-step approach to retrieve similar MR images. The first step classifies the query image as benign or malignant using the features that discriminate the classes. The second step then retrieves the most similar images within the predicted class using the features that distinguish the subclasses. In order to provide faster image retrieval, we propose an indexing method called clustering with principal component analysis (PCA) and KD-tree which groups subclass features into clusters using modified K-means clustering and separately reduces the dimensionality of each cluster using PCA. The reduced feature set is then indexed using a KD-tree. The proposed CBIR system is also made robust against misalignment that occurs during MR image acquisition. Experiments were carried out on a database consisting of 820 MR images of the brain tumor. The experimental results demonstrate the effectiveness of the proposed system and show the viability of clinical application.",
"title": ""
},
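The "clustering with PCA and KD-tree" indexing scheme above might look roughly like the sketch below, which uses plain K-means in place of the paper's modified variant; the cluster count, PCA dimensionality and synthetic features are assumptions.

```python
# Sketch: cluster subclass features, reduce each cluster separately with PCA,
# and index the reduced vectors in a KD-tree for fast retrieval.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))            # placeholder MR-image features

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
indexes = {}
for c in range(4):
    members = features[kmeans.labels_ == c]
    pca = PCA(n_components=8).fit(members)
    indexes[c] = (pca, cKDTree(pca.transform(members)))

query = rng.normal(size=(1, 64))
c = int(kmeans.predict(query)[0])                # route query to its cluster
pca, tree = indexes[c]
dist, idx = tree.query(pca.transform(query), k=5)
print("nearest neighbours in cluster", c, ":", idx)
```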
{
"docid": "8ed2bb129f08657b896f5033c481db8f",
"text": "simple and fast reflectional symmetry detection algorithm has been developed in this Apaper. The algorithm employs only the original gray scale image and the gradient information of the image, and it is able to detect multiple reflectional symmetry axes of an object in the image. The directions of the symmetry axes are obtained from the gradient orientation histogram of the input gray scale image by using the Fourier method. Both synthetic and real images have been tested using the proposed algorithm.",
"title": ""
},
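A heavily hedged sketch of the gradient-orientation-histogram idea above follows. The paper's exact Fourier criterion is not reproduced; this version merely reads a dominant axis direction off the doubled-angle (second circular-harmonic) component of the magnitude-weighted orientation histogram.

```python
# Hedged sketch: estimate a dominant axis direction from the gradient
# orientation histogram of a gray scale image via its second harmonic.
import numpy as np

def dominant_axis_direction(gray, bins=180):
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)                     # in [-pi, pi]
    hist, edges = np.histogram(orientation, bins=bins,
                               range=(-np.pi, np.pi), weights=magnitude)
    centers = (edges[:-1] + edges[1:]) / 2
    # Doubled angles remove the 180-degree ambiguity of gradient directions.
    c2 = np.sum(hist * np.exp(2j * centers))
    return 0.5 * np.angle(c2)                            # radians

image = np.zeros((64, 64))
image[:, 28:36] = 1.0                                    # vertical bar
print(np.degrees(dominant_axis_direction(image)))
```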
{
"docid": "226f84ed038a4509d9f3931d7df8b977",
"text": "Physically Asynchronous/Logically Synchronous (PALS) is an architecture pattern that allows developers to design and verify a system as though all nodes executed synchronously. The correctness of PALS protocol was formally verified. However, the implementation of PALS adds additional code that is otherwise not needed. In our case, we have a middleware (PALSWare) that supports PALS systems. In this paper, we introduce a verification framework that shows how we can apply Software Model Checking (SMC) to verify a PALS system at the source code level. SMC is an automated and exhaustive source code checking technology. Compared to verifying (hardware or software) models, verifying the actual source code is more useful because it minimizes any chance of false interpretation and eliminates the possibility of missing software bugs that were absent in the model but introduced during implementation. In other words, SMC reduces the semantic gap between what is verified and what is executed. Our approach is compositional, i.e., the verification of PALSWare is done separately from applications. Since PALSWare is inherently concurrent, to verify it via SMC we must overcome the statespace explosion problem, which arises from concurrency and asynchrony. To this end, we develop novel simplification abstractions, prove their soundness, and then use these abstractions to reduce the verification of a system with many threads to verifying a system with a relatively small number of threads. When verifying an application, we leverage the (already verified) synchronicity guarantees provided by the PALSWare to reduce the verification complexity significantly. Thus, our approach uses both “abstraction” and “composition”, the two main techniques to reduce statespace explosion. This separation between verification of PALSWare and applications also provides better management against upgrades to either. We validate our approach by verifying the current PALSWare implementation, and several PALSWare-based distributed real time applications.",
"title": ""
},
{
"docid": "3509f0bb534fbb5da5b232b91d81c8e9",
"text": "BACKGROUND\nBlighia sapida is a woody perennial multipurpose fruit tree species native to the Guinean forests of West Africa. The fleshy arils of the ripened fruits are edible. Seeds and capsules of the fruits are used for soap-making and all parts of the tree have medicinal properties. Although so far overlooked by researchers in the region, the tree is highly valued by farmers and is an important component of traditional agroforestry systems in Benin. Fresh arils, dried arils and soap are traded in local and regional markets in Benin providing substantial revenues for farmers, especially women. Recently, ackee has emerged as high-priority species for domestication in Benin but information necessary to elaborate a clear domestication strategy is still very sketchy. This study addresses farmers' indigenous knowledge on uses, management and perception of variation of the species among different ethnic groups taking into account also gender differences.\n\n\nMETHODS\n240 randomly selected persons (50% women) belonging to five different ethnic groups, 5 women active in the processing of ackee fruits and 6 traditional healers were surveyed with semi-structured interviews. Information collected refer mainly to the motivation of the respondents to conserve ackee trees in their land, the local uses, the perception of variation, the preference in fruits traits, the management practices to improve the production and regenerate ackee.\n\n\nRESULTS\nPeople have different interests on using ackee, variable knowledge on uses and management practices, and have reported nine differentiation criteria mainly related to the fruits. Ackee phenotypes with preferred fruit traits are perceived by local people to be more abundant in managed in-situ and cultivated stands than in unmanaged wild stands, suggesting that traditional management has initiated a domestication process. As many as 22 diseases have been reported to be healed with ackee. In general, indigenous knowledge about ackee varies among ethnic and gender groups.\n\n\nCONCLUSIONS\nWith the variation observed among ethnic groups and gender groups for indigenous knowledge and preference in fruits traits, a multiple breeding sampling strategy is recommended during germplasm collection and multiplication. This approach will promote sustainable use and conservation of ackee genetic resources.",
"title": ""
},
{
"docid": "f698b77df48a5fac4df7ba81b4444dd5",
"text": "Discontinuous-conduction mode (DCM) operation is usually employed in DC-DC converters for small inductor on printed circuit board (PCB) and high efficiency at light load. However, it is normally difficult for synchronous converter to realize the DCM operation, especially in high frequency applications, which requires a high speed and high precision comparator to detect the zero crossing point at cost of extra power losses. In this paper, a novel zero current detector (ZCD) circuit with an adaptive delay control loop for high frequency synchronous buck converter is presented. Compared to the conventional ZCD, proposed technique is proven to offer 8.5% efficiency enhancement when performed in a buck converter at the switching frequency of 4MHz and showed less sensitivity to the transistor mismatch of the sensor circuit.",
"title": ""
},
{
"docid": "97a4202d9dd2fe645e5d118449c92319",
"text": "In present scenario, the Indian government has announced the demonetization of all Rs 500 and Rs 1000, in reserve bank notes of Mahatma Gandhi series. Indian government has introduced a new Rs 500 and Rs 2000, to reduce fund illegal activity in India. Even then the new notes of fake or bogus currency are circulated in the society. The main objective of this work is used to identify fake currencies among the real. From the currency, the strip lines or continuous lines are detected from real and fake note by using edge detection techniques. HSV techniques are used to saturate the value of an input image. To achieve the enhance reliability and dynamic way in detecting the counterfeit currency.",
"title": ""
},
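The two processing steps named above, HSV conversion and edge-based detection of the strip lines, can be sketched with OpenCV as below; the synthetic note image and all thresholds are placeholders rather than values from the paper.

```python
# Sketch: HSV conversion plus edge/line detection of a note's security strip.
import cv2
import numpy as np

# Synthetic stand-in for a scanned note with a dark security strip.
note = np.full((200, 400, 3), 210, dtype=np.uint8)
cv2.line(note, (300, 0), (300, 199), (40, 40, 40), 3)

hsv = cv2.cvtColor(note, cv2.COLOR_BGR2HSV)
hue, saturation, value = cv2.split(hsv)

edges = cv2.Canny(value, threshold1=50, threshold2=150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=60, minLineLength=40, maxLineGap=5)
print("strip-line segments found:", 0 if lines is None else len(lines))
```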
{
"docid": "496501d679734b90dd9fd881389fcc34",
"text": "Learning is often identified with the acquisition, encoding, or construction of new knowledge, while retrieval is often considered only a means of assessing knowledge, not a process that contributes to learning. Here, we make the case that retrieval is the key process for understanding and for promoting learning. We provide an overview of recent research showing that active retrieval enhances learning, and we highlight ways researchers have sought to extend research on active retrieval to meaningful learning—the learning of complex educational materials as assessed on measures of inference making and knowledge application. However, many students lack metacognitive awareness of the benefits of practicing active retrieval. We describe two approaches to addressing this problem: classroom quizzing and a computer-based learning program that guides students to practice retrieval. Retrieval processes must be considered in any analysis of learning, and incorporating retrieval into educational activities represents a powerful way to enhance learning.",
"title": ""
},
{
"docid": "8848ddd97501ff8aa5e571852e7fb447",
"text": "Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. They must use little energy and be robust to environmental conditions, while also providing common services that make it easy to write applications. In TinyOS, the current state of the art in sensor node operating systems, reusable components implement common services, but each node runs a single statically-linked system image, making it hard to run multiple applications or incrementally update applications. We present SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum. SOS consists of dynamically-loaded modules and a common kernel, which implements messaging, dynamic memory, and module loading and unloading, among other services. Modules are not processes: they are scheduled cooperatively and there is no memory protection. Nevertheless, the system protects against common module bugs using techniques such as typed entry points, watchdog timers, and primitive resource garbage collection. Individual modules can be added and removed with minimal system interruption. We describe SOS's design and implementation, discuss tradeoffs, and compare it with TinyOS and with the Maté virtual machine. Our evaluation shows that despite the dynamic nature of SOS and its higher-level kernel interface, its long term total usage nearly identical to that of systems such as Matè and TinyOS.",
"title": ""
},
{
"docid": "108058f1814d7520003b44f1ffc99cb5",
"text": "The process of acquiring the energy surrounding a system and converting it into usable electrical energy is termed power harvesting. In the last few years, there has been a surge of research in the area of power harvesting. This increase in research has been brought on by the modern advances in wireless technology and low-power electronics such as microelectromechanical systems. The advances have allowed numerous doors to open for power harvesting systems in practical real-world applications. The use of piezoelectric materials to capitalize on the ambient vibrations surrounding a system is one method that has seen a dramatic rise in use for power harvesting. Piezoelectric materials have a crystalline structure that provides them with the ability to transform mechanical strain energy into electrical charge and, vice versa, to convert an applied electrical potential into mechanical strain. This property provides these materials with the ability to absorb mechanical energy from their surroundings, usually ambient vibration, and transform it into electrical energy that can be used to power other devices. While piezoelectric materials are the major method of harvesting energy, other methods do exist; for example, one of the conventional methods is the use of electromagnetic devices. In this paper we discuss the research that has been performed in the area of power harvesting and the future goals that must be achieved for power harvesting systems to find their way into everyday use.",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "46613dd249ed10d84b7be8c1b46bf5b4",
"text": "Today, a predictive controller becomes one of the state of the art in power electronics control techniques. The performance of this powerful control approach will be pushed forward by simplifying the main control criterion and objective function, and decreasing the number of calculations per sampling time. Recently, predictive control has been incorporated in the Z-source inverter (ZSI) family. For example, in quasi ZSI, the inverter capacitor voltage, inductor current, and output load currents are controlled to their setting points through deciding the required state; active or shoot through. The proposed algorithm reduces the number of calculations, where it decides the shoot-through (ST) case without checking the other possible states. The ST case is roughly optimized every two sampling periods. Through the proposed strategy, about 50% improvement in the computational power has been achieved as compared with the previous algorithm. Also, the objective function for the proposed algorithm consists of one weighting factor for the capacitor voltage without involving the inductor current term in the main objective function. The proposed algorithm is investigated with the simulation results based on MATLAB/SIMULINK software. A prototype of qZSI is constructed in the laboratory to obtain the experimental results using the Digital Signal Processor F28335.",
"title": ""
},
{
"docid": "6819116197ba7a081922ef33175c8882",
"text": "The recent advanced face recognition systems were built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs make their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is utilized as supervision to train a compact student network. Unlike previous works that represent the knowledge by the soften label probabilities, which are difficult to fit, we represent the knowledge by using the neurons at the higher hidden layer, which preserve as much information as the label probabilities, but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose neurons that are most relevant to face recognition. Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are the state-of-the-art face recognition systems, a compact student with simple network structure achieves better verification accuracy on LFW than its teachers, respectively. When using an ensemble of DeepID2+ as teacher, a mimicked student is able to outperform it and achieves 51.6× compression ratio and 90× speed-up in inference, making this cumbersome model applicable on portable devices. Introduction As the emergence of big training data, Deep Neural Networks (DNNs) recently attained great breakthroughs in face recognition [23, 20, 21, 22, 19, 15, 29, 30, 28] and become applicable in many commercial platforms such as social networks, e-commerce, and search engines. To absorb massive supervision from big training data, existing works typically trained a large DNN or a DNN ensemble, where each DNN consists of millions of parameters. Nevertheless, as face recognition shifts toward mobile and embedded devices, large DNNs are computationally expensive, which prevents them from being deployed to these devices. It motivates research of using a small network to fit very large training ∗indicates co-first authors who contributed equally. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. data. This work addresses model compression of DNNs for face recognition, by incorporating domain knowledge of learning face representation. There have been several attempts [1, 7, 18] in literature to compress DNNs, so as to make their deployments easier, where a single network (i.e. a student) was trained by using the knowledge learned with a large DNN or a DNN ensemble (i.e. a teacher) as supervision. This knowledge can be simply represented as the probabilities of label predictions by employing the softmax function [10]. Compared with the original 1-of-K hard labels, the label probabilities encode richer relative similarities among training samples and can train a DNN more effectively. However, this representation loses much information because most of the probabilities are close to zeros after squashed by softmax. To overcome this problem, Ba and Caruana [1] represented the learned knowledge by using the logits, which are the values before softmax activation but zero-meaned, revealing the relationship between labels as well as the similarities among samples in the logit space. However, as these unconstrained values (e.g. 
the large negatives) may contain noisy information that overfits the training data, using them as supervision limits the generalization ability of the student. Recently, Hinton et al. [7] showed that both the label probabilities and zero-meaned logits are two extreme outputs of the softmax functions, where the temperature becomes one and positive infinity, respectively. To remove target noise, they empirically searched for a suitable temperature in the softmax function, until it produced soften probabilities that were able to disclose the similarity structure of data. As these soften target labels comprise much valuable information, a single student trained on them is able to mimic the performance of a cumbersome network ensemble. Despite the successes of [7], our empirical results show that training on soft targets is difficult to converge when compressing DNNs for face recognition. Previous studies [23, 24, 20, 19] have shown that the face representation learned from classifying larger amount of identities in the training data (e.g. 250 thousand in [24]) may have better generalization capacity. In face recognition, it seems difficult to fit soft targets with high dimensionality, which makes convergence slow. In this work, we show that instead of using soft targets in the output layer, the knowledge of the teacher can also be obtained from the neurons in the top hidden layer, which preserve as much information as the soft targets (as the soft targets are predicted from these neurons) but are more compact, e.g. 512 versus 12,994 according to the net structure in [21]. As these neurons may contain noise or information not relevant to face recognition, they are further selected according to the usefulness of knowledge captured by them. In particular, the selection is motivated by three original observations (domain knowledge) of face representation disclosed in this work, which are naturally generalized to all DNNs trained by distinguishing massive identities, such as [19, 23, 24, 22]. (1) Deeply learned face representation by the face recognition task is a distributed representation [6] over face attributes, including the identity-related attributes (IA), such as gender, race, and shapes of facial components, as well as the identity non-related attributes (NA), such as expression, lighting, and photo quality. This observation implies that each attribute concept is explained by having some neurons being activated while each neuron is involved in representing more than one attribute, although attribute labels are not provided during training. (2) However, a certain amount of neurons are selective to NA or both NA and IA, implying that the distributed representation is neither invariant nor completely factorized, because attributes in NA are variations that should be removed in face recognition, whereas these two factors (NA and IA) are presented and coupled in some neurons. (3) Furthermore, a small amount of neurons are inhibitive to all attributes and server as noise. With these observations, we cast neuron selection as inference on a fully-connected graph, where each node represents attribute-selectiveness of neuron and each edge represents correlation between neurons. An efficient mean field algorithm [9] enables us to select neurons that are more selective or discriminative to IA, but less correlated with each other. As a result, the features of the selected neurons are able to maintain the inter-personal discriminativeness (i.e. 
distributed and factorized to explain IA), while reducing intra-personal variations (i.e. invariant to NA). We employ the features after neuron selection as regression targets to train the student. To evaluate neuron selection, we employ DeepID2+ [21] as a teacher (T1), which achieved state-of-the-art performance on the LFW benchmark [8]. This work is chosen as an example because it successfully incorporated multiple complex components for face recognition, such as local convolution [12], a ranking loss function [19], deeply supervised learning [13], and model ensemble [17]. The effectiveness of all these components in face recognition has been validated by many existing works [19, 23, 24, 27]. Evaluating neuron selection on it demonstrates its capacity and generalization ability in mimicking functions induced by different learning strategies in face recognition. With neuron selection, a student with a simple network structure is able to outperform a single network of T1 or its ensemble. Interestingly, this simple student generalizes well to mimic a deeper teacher (T2), DeepID3 [22], which is a recent extension of DeepID2+. Although there are other advanced methods [24, 19] in face recognition, [21, 22] are more suitable to be taken as baselines. They outperformed [24] and achieved comparable results to [19] on LFW with a much smaller amount of training data and identities, i.e. 290K images in [21] compared to 7.5M images in [24] and 200M images in [19]. We cannot compare with [24, 19] because their data are unavailable. The three main contributions of this work are summarized below. (1) We demonstrate that more compact supervision converges more efficiently when compressing DNNs for face recognition. Soft targets are difficult to fit because of their high dimensionality. Instead, neurons in the top hidden layer are suitable supervision, as they capture as much information as the soft targets but are more compact. (2) Three valuable observations are disclosed from the deeply learned face representation, identifying the usefulness of the knowledge captured in these neurons. These observations naturally generalize to all DNNs trained on face images. (3) With these observations, an efficient neuron selection method is proposed for model compression and its effectiveness is validated on T1 and T2. Face Model Compression: Training the Student via Neuron Selection The idea behind our method is to select informative neurons in the top hidden layer of a teacher, and to adopt the features (responses) of the chosen neurons as supervision to train a student, mimicking the teacher's feature space. We formulate the objective function of model compression as a regression problem given a training set D = {I_i, f_i}_{i=1}^{N},",
"title": ""
},
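The passage above describes training a compact student by regressing the features of selected top-hidden-layer neurons of a teacher. Below is a minimal sketch of that mimic-regression objective; the layer sizes, the selection indices, the linear projection, and the random stand-in data are illustrative assumptions rather than the authors' actual architecture or neuron-selection procedure.

```python
# Minimal sketch: train a small "student" to regress the teacher's
# selected hidden-layer neurons (mean-squared-error mimic loss).
# Shapes, optimizer settings, and the selection indices are assumptions.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))
student = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
selected_idx = torch.arange(128)      # hypothetical indices chosen by neuron selection
proj = nn.Linear(128, 128)            # maps student features to the selected-neuron space

opt = torch.optim.SGD(list(student.parameters()) + list(proj.parameters()), lr=1e-2)
mse = nn.MSELoss()

for step in range(100):               # toy loop over random "images"
    x = torch.randn(32, 512)          # stand-in for face-image inputs
    with torch.no_grad():
        target = teacher(x)[:, selected_idx]   # features of the selected teacher neurons
    pred = proj(student(x))
    loss = mse(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```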
{
"docid": "a583c568e3c2184e5bda272422562a12",
"text": "Video games are primarily designed for the players. However, video game spectating is also a popular activity, boosted by the rise of online video sites and major gaming tournaments. In this paper, we focus on the spectator, who is emerging as an important stakeholder in video games. Our study focuses on Starcraft, a popular real-time strategy game with millions of spectators and high level tournament play. We have collected over a hundred stories of the Starcraft spectator from online sources, aiming for as diverse a group as possible. We make three contributions using this data: i) we find nine personas in the data that tell us who the spectators are and why they spectate; ii) we strive to understand how different stakeholders, like commentators, players, crowds, and game designers, affect the spectator experience; and iii) we infer from the spectators' expressions what makes the game entertaining to watch, forming a theory of distinct types of information asymmetry that create suspense for the spectator. One design implication derived from these findings is that, rather than presenting as much information to the spectator as possible, it is more important for the stakeholders to be able to decide how and when they uncover that information.",
"title": ""
},
{
"docid": "2ab848215bd066373c9da1c6c01432a8",
"text": "PURPOSE\nPersonal mobility vehicles (PMVs) are under active development. Most PMVs are wheel-driven, a mode of transport notable for its efficiency. However, wheeled PMVs tend to have poor mobility against negotiating obstacles. The four-wheeled vehicle RT-Mover PType 3 has been developed featuring wheeled legs capable of leg motion. This allows the PMV to overcome uneven terrains, including a step approached at an angle, which ordinary wheelchairs cannot negotiate.\n\n\nMETHOD\nThis article discusses a gait algorithm in which a leg executes the necessary leg motion when optionally presented with obstacles on a road. In order to lift a wheel off the ground and perform a leg motion, the support wheels must be moved to support points to ensure that the vehicle remains stable on three wheels. When moving towards the target support point, a wheel may encounter another obstacle, and a response method for this case is also described.\n\n\nRESULTS\nTo assess the gait algorithm, several configurations of obstacles were used for performance tests with a passenger. The capabilities of the PMV were demonstrated through experiments.\n\n\nCONCLUSION\nWe proposed a novel gait algorithm for our PMV and realised the proposed motion pattern for PMV-based negotiating obstacles.\n\n\nIMPLICATIONS FOR REHABILITATION\nOur single-seat personal mobility vehicle, RT-Mover PType 3 features wheels attached on legs capable of performing leg motion, which allows the vehicle to traverse rough terrains in urban areas. We proposed a gait algorithm for RT-Mover PType 3 consisting of a series of leg motions in response to rough terrain. With this algorithm, the vehicle can traverse not only randomly placed obstacles, but also a step approached at an oblique angle, which conventional powered wheelchairs cannot navigate. Experiments with a passenger demonstrated the effectiveness of the proposed gait algorithm, suggesting that RT-Mover PType 3 can expand the mobility and range of activities of wheelchair users.",
"title": ""
},
{
"docid": "f10996698f2596de3ca7436a82e8c326",
"text": "Hybrid multiple-antenna transceivers, which combine large-dimensional analog pre/postprocessing with lower-dimensional digital processing, are the most promising approach for reducing the hardware cost and training overhead in massive MIMO systems. This article provides a comprehensive survey of the various incarnations of such structures that have been proposed in the literature. We provide a taxonomy in terms of the required channel state information, that is, whether the processing adapts to the instantaneous or average (second-order) channel state information; while the former provides somewhat better signal- to-noise and interference ratio, the latter has much lower overhead for CSI acquisition. We furthermore distinguish hardware structures of different complexities. Finally, we point out the special design aspects for operation at millimeter-wave frequencies.",
"title": ""
},
{
"docid": "c71b4a8d6d9ffc64c9e86aab40d9784f",
"text": "Voice impersonation is not the same as voice transformation, although the latter is an essential element of it. In voice impersonation, the resultant voice must convincingly convey the impression of having been naturally produced by the target speaker, mimicking not only the pitch and other perceivable signal qualities, but also the style of the target speaker. In this paper, we propose a novel neural-network based speech quality- and style-mimicry framework for the synthesis of impersonated voices. The framework is built upon a fast and accurate generative adversarial network model. Given spectrographic representations of source and target speakers' voices, the model learns to mimic the target speaker's voice quality and style, regardless of the linguistic content of either's voice, generating a synthetic spectrogram from which the time-domain signal is reconstructed using the Griffin-Lim method. In effect, this model reframes the well-known problem of style-transfer for images as the problem of style-transfer for speech signals, while intrinsically addressing the problem of durational variability of speech sounds. Experiments demonstrate that the model can generate extremely convincing samples of impersonated speech. It is even able to impersonate voices across different genders effectively. Results are qualitatively evaluated using standard procedures for evaluating synthesized voices.",
"title": ""
}
] |
scidocsrr
|
59203d746c4fd90a6243c2b40e9b5b48
|
Information Theory and the IrisCode
|
[
{
"docid": "b125649628d46871b2212c61e355ec43",
"text": "AbstructA method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from ExclusiveOR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about lo”’.",
"title": ""
},
{
"docid": "a19f84fec74cae5573397c155e6d5789",
"text": "The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.",
"title": ""
}
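A minimal sketch of the fusion idea described above, assuming a toy bit layout: an ordinary Hamming distance computed over mutually stable bits, a fragile-bit distance that compares where the two codes mark fragile bits, and a weighted score fusion. The exact fragile-bit-distance definition and the fusion weight used in the paper are not reproduced here; both are assumptions for illustration.

```python
# Toy iris-code comparison: Hamming distance over non-fragile bits,
# a fragile-bit distance over the fragility masks, and a weighted fusion.
# The random codes, the FBD definition, and the 0.4 weight are assumptions.
import numpy as np

rng = np.random.default_rng(0)
code_a, code_b = rng.integers(0, 2, 2048), rng.integers(0, 2, 2048)   # iris codes
frag_a, frag_b = rng.integers(0, 2, 2048), rng.integers(0, 2, 2048)   # 1 = fragile bit

def hamming_distance(a, b, fa, fb):
    usable = (fa == 0) & (fb == 0)                  # compare only mutually stable bits
    return np.count_nonzero(a[usable] != b[usable]) / max(usable.sum(), 1)

def fragile_bit_distance(fa, fb):
    # Fraction of positions where the two codes disagree on which bits are fragile.
    return np.count_nonzero(fa != fb) / fa.size

hd = hamming_distance(code_a, code_b, frag_a, frag_b)
fbd = fragile_bit_distance(frag_a, frag_b)
fused = 0.6 * hd + 0.4 * fbd                        # assumed linear score fusion
print(f"HD={hd:.3f}  FBD={fbd:.3f}  fused={fused:.3f}")
```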
] |
[
{
"docid": "dea7d83ed497fc95f4948a5aa4787b18",
"text": "The distinguishing feature of the Fog Computing (FC) paradigm is that FC spreads communication and computing resources over the wireless access network, so as to provide resource augmentation to resource and energy-limited wireless (possibly mobile) devices. Since FC would lead to substantial reductions in energy consumption and access latency, it will play a key role in the realization of the Fog of Everything (FoE) paradigm. The core challenge of the resulting FoE paradigm is tomaterialize the seamless convergence of three distinct disciplines, namely, broadband mobile communication, cloud computing, and Internet of Everything (IoE). In this paper, we present a new IoE architecture for FC in order to implement the resulting FoE technological platform. Then, we elaborate the related Quality of Service (QoS) requirements to be satisfied by the underlying FoE technological platform. Furthermore, in order to corroborate the conclusion that advancements in the envisioned architecture description, we present: (i) the proposed energy-aware algorithm adopt Fog data center; and, (ii) the obtained numerical performance, for a real-world case study that shows that our approach saves energy consumption impressively in theFog data Center compared with the existing methods and could be of practical interest in the incoming Fog of Everything (FoE) realm.",
"title": ""
},
{
"docid": "dd911eff60469b32330c5627c288f19f",
"text": "Routing Algorithms are driving the growth of the data transmission in wireless sensor networks. Contextually, many algorithms considered the data gathering and data aggregation. This paper uses the scenario of clustering and its impact over the SPIN protocol and also finds out the effect over the energy consumption in SPIN after uses of clustering. The proposed scheme is implemented using TCL/C++ programming language and evaluated using Ns2.34 simulator and compare with LEACH. Simulation shows proposed protocol exhibits significant performance gains over the LEACH for lifetime of network and guaranteed data transmission.",
"title": ""
},
{
"docid": "667a457dcb1f379abd4e355e429dc40d",
"text": "BACKGROUND\nViolent death is a serious problem in the United States. Previous research showing US rates of violent death compared with other high-income countries used data that are more than a decade old.\n\n\nMETHODS\nWe examined 2010 mortality data obtained from the World Health Organization for populous, high-income countries (n = 23). Death rates per 100,000 population were calculated for each country and for the aggregation of all non-US countries overall and by age and sex. Tests of significance were performed using Poisson and negative binomial regressions.\n\n\nRESULTS\nUS homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher. For 15- to 24-year-olds, the gun homicide rate in the United States was 49.0 times higher. Firearm-related suicide rates were 8.0 times higher in the United States, but the overall suicide rates were average. Unintentional firearm deaths were 6.2 times higher in the United States. The overall firearm death rate in the United States from all causes was 10.0 times higher. Ninety percent of women, 91% of children aged 0 to 14 years, 92% of youth aged 15 to 24 years, and 82% of all people killed by firearms were from the United States.\n\n\nCONCLUSIONS\nThe United States has an enormous firearm problem compared with other high-income countries, with higher rates of homicide and firearm-related suicide. Compared with 2003 estimates, the US firearm death rate remains unchanged while firearm death rates in other countries decreased. Thus, the already high relative rates of firearm homicide, firearm suicide, and unintentional firearm death in the United States compared with other high-income countries increased between 2003 and 2010.",
"title": ""
},
{
"docid": "ede1cfd85dbb2aaa6451128c222d99a2",
"text": "Crowdsourcing is a crowd-based outsourcing, where a requester (task owner) can outsource tasks to workers (public crowd). Recently, mobile crowdsourcing, which can leverage workers' data from smartphones for data aggregation and analysis, has attracted much attention. However, when the data volume is getting large, it becomes a difficult problem for a requester to aggregate and analyze the incoming data, especially when the requester is an ordinary smartphone user or a start-up company with limited storage and computation resources. Besides, workers are concerned about their identity and data privacy. To tackle these issues, we introduce a three-party architecture for mobile crowdsourcing, where the cloud is implemented between workers and requesters to ease the storage and computation burden of the resource-limited requester. Identity privacy and data privacy are also achieved. With our scheme, a requester is able to verify the correctness of computation results from the cloud. We also provide several aggregated statistics in our work, together with efficient data update methods. Extensive simulation shows both the feasibility and efficiency of our proposed solution.",
"title": ""
},
{
"docid": "ffec976c3556387232e0e905f06d82dd",
"text": "Much natural data is hierarchical in nature. Moreover, this hierarchy is often shared between different instances. We introduce the nested Chinese Restaurant Franchise Process to obtain both hierarchical tree-structured representations for objects, akin to (but more general than) the nested Chinese Restaurant Process while sharing their structure akin to the Hierarchical Dirichlet Process. Moreover, by decoupling the structure generating part of the process from the components responsible for the observations, we are able to apply the same statistical approach to a variety of user generated data. In particular, we model the joint distribution of microblogs and locations for Twitter for users. This leads to a 40% reduction in location uncertainty relative to the best previously published results. Moreover, we model documents from the NIPS papers dataset, obtaining excellent perplexity relative to (hierarchical) Pachinko allocation and LDA.",
"title": ""
},
{
"docid": "99361418a043f546f5eaed54746d6abc",
"text": "Non-negative Matrix Factorization (NMF) and Probabilistic Latent Semantic Indexing (PLSI) have been successfully applied to document clustering recently. In this paper, we show that PLSI and NMF (with the I-divergence objective function) optimize the same objective function, although PLSI and NMF are different algorithms as verified by experiments. This provides a theoretical basis for a new hybrid method that runs PLSI and NMF alternatively, each jumping out of local minima of the other method successively, thus achieving a better final solution. Extensive experiments on five real-life datasets show relations between NMF and PLSI, and indicate the hybrid method leads to significant improvements over NMFonly or PLSI-only methods. We also show that at first order approximation, NMF is identical to χ-statistic.",
"title": ""
},
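The NMF referred to above is NMF trained with the I-divergence (generalized Kullback-Leibler) objective. A minimal sketch of its multiplicative updates, the NMF half of the hybrid scheme, is given below; normalizing the factors turns them into PLSI-style probabilities, which is what lets the two methods exchange solutions. Matrix sizes and iteration counts are arbitrary.

```python
# KL (I-divergence) NMF via multiplicative updates: V ≈ W @ H.
# This is only the NMF half of the NMF/PLSI hybrid; sizes and iterations are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 50))            # nonnegative term-document matrix
k = 5                                # number of latent topics
W = rng.random((100, k)) + 1e-3
H = rng.random((k, 50)) + 1e-3

for _ in range(200):
    WH = W @ H + 1e-12
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + 1e-12)
    WH = W @ H + 1e-12
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + 1e-12)

# Normalizing the columns of W and the rows of H to sum to one gives
# PLSI-style probability factors, so a PLSI EM pass could continue from here.
```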
{
"docid": "84d28257f98ec1d78dcdfbdd7ec17e78",
"text": "True gender self child therapy is based on the premise of gender as a web that weaves together nature, nurture, and culture and allows for a myriad of healthy gender outcomes. This article presents concepts of true gender self, false gender self, and gender creativity as they operationalize in clinical work with children who need therapeutic supports to establish an authentic gender self while developing strategies for negotiating an environment resistant to that self. Categories of gender nonconforming children are outlined and excerpts of a treatment of a young transgender child are presented to illustrate true gender self child therapy.",
"title": ""
},
{
"docid": "c4caa735537ccd82c83a330fa85e142d",
"text": "We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. To this end, we introduce a new way to fuse modality-specific product embeddings into a joint product embedding, in order to leverage both product content information, such as textual descriptions and images, and product collaborative filtering signal. By introducing the fusion step at the very end of our architecture, we are able to train each modality separately, allowing us to keep a modular architecture that is preferable in real-world recommendation deployments. We analyze our performance on normal and hard recommendation setups such as cold-start and cross-category recommendations and achieve good performance on a large product shopping dataset.",
"title": ""
},
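A minimal sketch of the kind of late fusion described above: separately produced text, image, and collaborative-filtering embeddings are projected to a common space and combined with learned modality weights. The dimensions and the softmax-gated sum are assumptions, not the paper's actual fusion layer.

```python
# Sketch of late fusion of separately trained modality embeddings into one
# joint product embedding; dimensions and the fusion form are assumptions.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, dims=(300, 512, 64), out_dim=128):
        super().__init__()
        # One projection per modality (text, image, collaborative filtering).
        self.proj = nn.ModuleList([nn.Linear(d, out_dim) for d in dims])
        self.gate = nn.Parameter(torch.zeros(len(dims)))   # learned modality weights

    def forward(self, text_emb, image_emb, cf_emb):
        parts = [p(e) for p, e in zip(self.proj, (text_emb, image_emb, cf_emb))]
        w = torch.softmax(self.gate, dim=0)
        return sum(wi * pi for wi, pi in zip(w, parts))

fused = LateFusion()(torch.randn(4, 300), torch.randn(4, 512), torch.randn(4, 64))
print(fused.shape)   # torch.Size([4, 128])
```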
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
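A toy sketch of the temporal-prior idea above: active users are detected by greedy sparse recovery, with the previous slot's estimated active set used to warm-start the support. This is a plain OMP-style illustration under assumed dimensions and channel model, not the letter's actual DCS algorithm.

```python
# Toy greedy multi-user detection with a temporal prior: start from the
# previous slot's active-user set, then add atoms OMP-style.
# Dimensions, noise level, and the stopping rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_obs, n_active = 64, 32, 6
A = rng.normal(size=(n_obs, n_users)) / np.sqrt(n_obs)    # spreading/channel matrix

def detect(y, A, prior_support, max_active):
    support = list(prior_support)
    residual = y.copy()
    if support:
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    while len(support) < max_active:
        corr = np.abs(A.T @ residual)
        corr[support] = 0
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return sorted(support)

true_support = sorted(rng.choice(n_users, n_active, replace=False))
x = np.zeros(n_users); x[true_support] = rng.normal(size=n_active)
y = A @ x + 0.01 * rng.normal(size=n_obs)
prev_slot_support = true_support[:4]     # pretend 4 users stayed active from the last slot
print(detect(y, A, prev_slot_support, n_active), true_support)
```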
{
"docid": "245371dccf75c8982f77c4d48d84d370",
"text": "This paper addresses the problem of streaming packetized media over a lossy packet network in a rate-distortion optimized way. We show that although the data units in a media presentation generally depend on each other according to a directed acyclic graph, the problem of rate-distortion optimized streaming of an entire presentation can be reduced to the problem of error-cost optimized transmission of an isolated data unit. We show how to solve the latter problem in a variety of scenarios, including the important common scenario of sender-driven streaming with feedback over a best-effort network, which we couch in the framework of Markov decision processes. We derive a fast practical algorithm for nearly optimal streaming in this scenario, and we derive a general purpose iterative descent algorithm for locally optimal streaming in arbitrary scenarios. Experimental results show that systems based on our algorithms have steady-state gains of 2-6 dB or more over systems that are not rate-distortion optimized. Furthermore, our systems essentially achieve the best possible performance: the operational distortion-rate function of the source at the capacity of the packet erasure channel.",
"title": ""
},
{
"docid": "c19b63a2c109c098c22877bcba8690ae",
"text": "A monolithic current-mode pulse width modulation (PWM) step-down dc-dc converter with 96.7% peak efficiency and advanced control and protection circuits is presented in this paper. The high efficiency is achieved by \"dynamic partial shutdown strategy\" which enhances circuit speed with less power consumption. Automatic PWM and \"pulse frequency modulation\" switching boosts conversion efficiency during light load operation. The modified current sensing circuit and slope compensation circuit simplify the current-mode control circuit and enhance the response speed. A simple high-speed over-current protection circuit is proposed with the modified current sensing circuit. The new on-chip soft-start circuit prevents the power on inrush current without additional off-chip components. The dc-dc converter has been fabricated with a 0.6 mum CMOS process and measured 1.35 mm2 with the controller measured 0.27 mm2. Experimental results show that the novel on-chip soft-start circuit with longer than 1.5 ms soft-start time suppresses the power-on inrush current. This converter can operate at 1.1 MHz with supply voltage from 2.2 to 6.0 V. Measured power efficiency is 88.5-96.7% for 0.9 to 800 mA output current and over 85.5% for 1000 mA output current.",
"title": ""
},
{
"docid": "8addf385803074288c1a07df92ed1b9f",
"text": "In a permanent magnet synchronous motor where inductances vary as a function of rotor angle, the 2 phase (d-q) equivalent circuit model is commonly used for simplicity and intuition. In this article, a two phase model for a PM synchronous motor is derived and the properties of the circuits and variables are discussed in relation to the physical 3 phase entities. Moreover, the paper suggests methods of obtaining complete model parameters from simple laboratory tests. Due to the lack of developed procedures in the past, obtaining model parameters were very difficult and uncertain, because some model parameters are not directly measurable and vary depending on the operating conditions. Formulation is mainly for interior permanent magnet synchronous motors but can also be applied to surface permanent magnet motors.",
"title": ""
},
{
"docid": "e94c6f4f6336fd244f99071b97388b99",
"text": "While CubeSats have thus far been used exclusively in Low Earth Orbit (LEO), NASA is now investigating the possibility to deploy CubeSats beyond LEO to carry out scientific experiments in Deep Space. Such CubeSats require a high-gain antenna that fits in a constrained and limited volume. This paper introduces a 42.8 dBi gain deployable Ka-band antenna folding in a 1.5U stowage volume suitable for 3U and 6U class CubeSats.",
"title": ""
},
{
"docid": "0aefbe4b8d84c1d5829571ae61ca091a",
"text": "More than 30% of U.S. adults report having experienced low back pain within the preceding three months. Although most low back pain is nonspecific and self-limiting, a subset of patients develop chronic low back pain, defined as persistent symptoms for longer than three months. Low back pain is categorized as nonspecific low back pain without radiculopathy, low back pain with radicular symptoms, or secondary low back pain with a spinal cause. Imaging should be reserved for patients with red flags for cauda equina syndrome, recent trauma, risk of infection, or when warranted before treatment (e.g., surgical, interventional). Prompt recognition of cauda equina syndrome is critical. Patient education should be combined with evidence-guided pharmacologic therapy. Goals of therapy include reducing the severity of pain symptoms, pain interference, and disability, as well as maximizing activity. Validated tools such as the Oswestry Disability Index can help assess symptom severity and functional change in patients with chronic low back pain. Epidural steroid injections do not improve pain or disability in patients with spinal stenosis. Spinal manipulation therapy produces small benefits for up to six months. Because long-term data are lacking for spinal surgery, patient education about realistic outcome expectations is essential.",
"title": ""
},
{
"docid": "c432a44e48e777a7a3316c1474f0aa12",
"text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.",
"title": ""
},
{
"docid": "42520b1cfaec4a5f890f7f0845d5459b",
"text": "Class imbalance problem is quite pervasive in our nowadays human practice. This problem basically refers to the skewness in the data underlying distribution which, in turn, imposes many difficulties on typical machine learning algorithms. To deal with the emerging issues arising from multi-class skewed distributions, existing efforts are mainly divided into two categories: model-oriented solutions and data-oriented techniques. Focusing on the latter, this paper presents a new over-sampling technique which is inspired by Mahalanobis distance. The presented over-sampling technique, called MDO (Mahalanobis Distance-based Over-sampling technique), generates synthetic samples which have the same Mahalanobis distance from the considered class mean as other minority class examples. By preserving the covariance structure of the minority class instances and intelligently generating synthetic samples along the probability contours, new minority class instances are modelled better for learning algorithms. Moreover, MDO can reduce the risk of overlapping between different class regions which are considered as a serious challenge in multi-class problems. Our theoretical analyses and empirical observations across wide spectrum multi-class imbalanced benchmarks indicate that MDO is the method of choice by offering statistical superior MAUC and precision compared to the popular over-sampling techniques.",
"title": ""
},
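A minimal sketch of the sampling rule described above: each synthetic point keeps a seed minority sample's Mahalanobis distance to the class mean by scaling a random direction under the class covariance. The covariance regularization and sample counts are assumptions.

```python
# Sketch of Mahalanobis-distance-preserving oversampling: each synthetic
# point keeps a seed point's Mahalanobis distance to the minority-class mean.
# The covariance regularization and the sample counts are assumptions.
import numpy as np

rng = np.random.default_rng(0)
minority = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=40)

mean = minority.mean(axis=0)
cov = np.cov(minority, rowvar=False) + 1e-6 * np.eye(2)
cov_inv = np.linalg.inv(cov)
L = np.linalg.cholesky(cov)

def synthesize(seed_point):
    d = np.sqrt((seed_point - mean) @ cov_inv @ (seed_point - mean))  # Mahalanobis distance
    direction = rng.normal(size=mean.size)
    direction /= np.linalg.norm(direction)
    # A unit vector mapped through L has Mahalanobis norm 1, so scale it by d.
    return mean + d * (L @ direction)

seeds = minority[rng.integers(0, len(minority), 20)]
synthetic = np.array([synthesize(p) for p in seeds])
```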
{
"docid": "6022465cd0dd5412281abb6c7a63c31c",
"text": "Twenty native Korean-speaking subjects heard 22 English word-initial consonants in three vowel contexts produced by three native English talkers. The subjects orthographically labeled each English consonant as the closest Korean consonant. They then judged how similar the English consonant was to the Korean consonant on a scale of 1 to 5. Some English consonants were labeled consistently as a single Korean consonant and judged to be very similar. Other English consonants were labeled consistently as a single Korean consonant but judged to be less similar. Still other English consonants were inconsistently labeled. Korean acoustic cues, vowel context, and token differences appeared to influence labeling choices.",
"title": ""
},
{
"docid": "7bf137d513e7a310e121eecb5f59ae27",
"text": "BACKGROUND\nChildren with intellectual disability are at heightened risk for behaviour problems and diagnosed mental disorder.\n\n\nMETHODS\nThe present authors studied the early manifestation and continuity of problem behaviours in 205 pre-school children with and without developmental delays.\n\n\nRESULTS\nBehaviour problems were quite stable over the year from age 36-48 months. Children with developmental delays were rated higher on behaviour problems than their non-delayed peers, and were three times as likely to score in the clinical range. Mothers and fathers showed high agreement in their rating of child problems, especially in the delayed group. Parenting stress was also higher in the delayed group, but was related to the extent of behaviour problems rather than to the child's developmental delay.\n\n\nCONCLUSIONS\nOver time, a transactional model fit the relationship between parenting stress and behaviour problems: high parenting stress contributed to a worsening in child behaviour problems over time, and high child behaviour problems contributed to a worsening in parenting stress. Findings for mothers and fathers were quite similar.",
"title": ""
}
] |
scidocsrr
|
7d32406f6284e6ea25f9994314f4ad38
|
Reported maternal tendencies predict the reward value of infant facial cuteness, but not cuteness detection.
|
[
{
"docid": "6a0c269074d80f26453d1fec01cafcec",
"text": "Advances in neurobiology permit neuroscientists to manipulate specific brain molecules, neurons and systems. This has lead to major advances in the neuroscience of reward. Here, it is argued that further advances will require equal sophistication in parsing reward into its specific psychological components: (1) learning (including explicit and implicit knowledge produced by associative conditioning and cognitive processes); (2) affect or emotion (implicit 'liking' and conscious pleasure) and (3) motivation (implicit incentive salience 'wanting' and cognitive incentive goals). The challenge is to identify how different brain circuits mediate different psychological components of reward, and how these components interact.",
"title": ""
}
] |
[
{
"docid": "6554f662f667b8b53ad7b75abfa6f36f",
"text": "present paper introduces an innovative approach to automatically grade the disease on plant leaves. The system effectively inculcates Information and Communication Technology (ICT) in agriculture and hence contributes to Precision Agriculture. Presently, plant pathologists mainly rely on naked eye prediction and a disease scoring scale to grade the disease. This manual grading is not only time consuming but also not feasible. Hence the current paper proposes an image processing based approach to automatically grade the disease spread on plant leaves by employing Fuzzy Logic. The results are proved to be accurate and satisfactory in contrast with manual grading. Keywordscolor image segmentation, disease spot extraction, percent-infection, fuzzy logic, disease grade. INTRODUCTION The sole area that serves the food needs of the entire human race is the Agriculture sector. It has played a key role in the development of human civilization. Plants exist everywhere we live, as well as places without us. Plant disease is one of the crucial causes that reduces quantity and degrades quality of the agricultural products. Plant Pathology is the scientific study of plant diseases caused by pathogens (infectious diseases) and environmental conditions (physiological factors). It involves the study of pathogen identification, disease etiology, disease cycles, economic impact, plant disease epidemiology, plant disease resistance, pathosystem genetics and management of plant diseases. Disease is impairment to the normal state of the plant that modifies or interrupts its vital functions such as photosynthesis, transpiration, pollination, fertilization, germination etc. Plant diseases have turned into a nightmare as it can cause significant reduction in both quality and quantity of agricultural products [2]. Information and Communication Technology (ICT) application is going to be implemented as a solution in improving the status of the agriculture sector [3]. Due to the manifestation and developments in the fields of sensor networks, robotics, GPS technology, communication systems etc, precision agriculture started emerging [10]. The objectives of precision agriculture are profit maximization, agricultural input rationalization and environmental damage reduction by adjusting the agricultural practices to the site demands. In the area of disease management, grade of the disease is determined to provide an accurate and precision treatment advisory. EXISTING SYSTEM: MANUAL GRADING Presently the plant pathologists mainly rely on the naked eye prediction and a disease scoring scale to grade the disease on leaves. There are some problems associated with this manual grading. Diseases are inevitable in plants. When a plant gets affected by the disease, a treatment advisory is required to cure the Arun Kumar R et al, Int. J. Comp. Tech. Appl., Vol 2 (5), 1709-1716 IJCTA | SEPT-OCT 2011 Available online@www.ijcta.com 1709 ISSN:2229-6093",
"title": ""
},
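A minimal sketch of the grading pipeline outlined above: compute percent infection as the diseased area divided by the leaf area, then map it to a grade with simple triangular fuzzy memberships. The segmentation masks, membership breakpoints, and grade labels are assumptions, not the paper's calibrated scoring scale.

```python
# Toy disease-grading sketch: percent infection from a segmented leaf image,
# then a tiny fuzzy-style grade assignment. All thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in masks; in practice these come from color segmentation of a leaf photo.
leaf_mask = rng.random((128, 128)) > 0.2                    # True where the leaf is
spot_mask = leaf_mask & (rng.random((128, 128)) > 0.85)     # True on disease spots

percent_infection = 100.0 * spot_mask.sum() / max(leaf_mask.sum(), 1)

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

memberships = {
    "grade 0 (healthy)":  tri(percent_infection, -1, 0, 5),
    "grade 1 (mild)":     tri(percent_infection, 0, 10, 25),
    "grade 2 (moderate)": tri(percent_infection, 15, 35, 55),
    "grade 3 (severe)":   tri(percent_infection, 45, 75, 101),
}
grade = max(memberships, key=memberships.get)
print(f"{percent_infection:.1f}% infected -> {grade}")
```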
{
"docid": "cb011c7e0d4d5f6d05e28c07ff02e18b",
"text": "The legendary wealth in gold of ancient Egypt seems to correspond with an unexpected high number of gold production sites in the Eastern Desert of Egypt and Nubia. This contribution introduces briefly the general geology of these vast regions and discusses the geology of the different varieties of the primary gold occurrences (always related to auriferous quartz mineralization in veins or shear zones) as well as the variable physico-chemical genesis of the gold concentrations. The development of gold mining over time, from Predynastic (ca. 3000 BC) until the end of Arab gold production times (about 1350 AD), including the spectacular Pharaonic periods is outlined, with examples of its remaining artefacts, settlements and mining sites in remote regions of the Eastern Desert of Egypt and Nubia. Finally, some estimates on the scale of gold production are presented. 2002 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "cf3804e332e9bec1120261f9e4f98da8",
"text": "We propose Bilingually-constrained Recursive Auto-encoders (BRAE) to learn semantic phrase embeddings (compact vector representations for phrases), which can distinguish the phrases with different semantic meanings. The BRAE is trained in a way that minimizes the semantic distance of translation equivalents and maximizes the semantic distance of nontranslation pairs simultaneously. After training, the model learns how to embed each phrase semantically in two languages and also learns how to transform semantic embedding space in one language to the other. We evaluate our proposed method on two end-to-end SMT tasks (phrase table pruning and decoding with phrasal semantic similarities) which need to measure semantic similarity between a source phrase and its translation candidates. Extensive experiments show that the BRAE is remarkably effective in these two tasks.",
"title": ""
},
{
"docid": "b6161c07694b61e0cafdfec24914af61",
"text": "A noninverting buck-boost dc-dc converter can work in the buck, boost, or buck-boost mode, but it has been analyzed that operating in the buck-boost mode has the lowest efficiency. However, if the buck-boost mode is excluded, the converter may jump between the buck and boost modes when the input voltage approaches the output voltage. This transition region is called the mixed mode, and larger output voltage ripples are expectable. In this brief, the conditions for the converter to operate in the mixed mode are analyzed, including the impact of mismatches between ramp signals, and a ramp generator is designed accordingly. Moreover, a full-cycle current-sensing circuit is proposed, and it can effectively inhibit the switching noise on the sensed current signal. The proposed chip was fabricated by the 0.35-μm 2P4M 3.3-V/5-V mixed-signal polycide process. The maximal measured efficiency is 93.5%.",
"title": ""
},
{
"docid": "3e62ac4e3476cc2999808f0a43a24507",
"text": "We present a detailed description of a new Bioconductor package, phyloseq, for integrated data and analysis of taxonomically-clustered phylogenetic sequencing data in conjunction with related data types. The phyloseq package integrates abundance data, phylogenetic information and covariates so that exploratory transformations, plots, and confirmatory testing and diagnostic plots can be carried out seamlessly. The package is built following the S4 object-oriented framework of the R language so that once the data have been input the user can easily transform, plot and analyze the data. We present some examples that highlight the methods and the ease with which we can leverage existing packages.",
"title": ""
},
{
"docid": "c3e7a2d7689ef31140b44d4acdc196c3",
"text": "Path planning for autonomous vehicles in dynamic environments is an important but challenging problem, due to the constraints of vehicle dynamics and existence of surrounding vehicles. Typical trajectories of vehicles involve different modes of maneuvers, including lane keeping, lane change, ramp merging, and intersection crossing. There exist prior arts using the rule-based high-level decision making approaches to decide the mode switching. Instead of using explicit rules, we propose a unified path planning approach using Model Predictive Control (MPC), which automatically decides the mode of maneuvers. To ensure safety, we model surrounding vehicles as polygons and develop a type of constraints in MPC to enforce the collision avoidance between the ego vehicle and surrounding vehicles. To achieve comfortable and natural maneuvers, we include a lane-associated potential field in the objective function of the MPC. We have simulated the proposed method in different test scenarios and the results demonstrate the effectiveness of the proposed approach in automatically generating reasonable maneuvers while guaranteeing the safety of the autonomous vehicle.",
"title": ""
},
{
"docid": "384e5120263f0e4e1f463aebeb35552c",
"text": "Automotive radar systems are starting to divide into two groups: Highly specialized stand alone low-cost sensors targeting high volume markets and high-performance multi purpose sensors used in sophisticated data fusion architectures. While the first group focuses on ultimate cost reduction, the latter provides the basis for future high-performance driver assistance functions which might be deployed in middle- to luxury-class cars. This paper addresses both sensor groups and lists OEM requirements resulting from the specifications of future driver assistance functions and from upcoming perception system architectures. The second part of this contribution addresses recent trends in automotive radar sensing technology.",
"title": ""
},
{
"docid": "d2343666a57124cca836ad9a5d784d5b",
"text": "In order to further advance research within management accounting and integrated information systems (IIS), an understanding of what research has already been done and what research is needed is of particular importance. The purpose of this paper is to uncover, classify and interpret current research within management accounting and IIS. This is done partly to identify research gaps and propose directions for future research and partly to guide researchers and practitioners investigating and making decisions on how to better synthesise the two areas. Based on the strengths of existing frameworks covering elements of management accounting and IIS a new and more comprehensive theoretical framework is developed. This is used as a basis for classifying and presentation of the reviewed literature in structured form. The outcome of the review is an identification of research gaps and a proposal of research opportunities within different research paradigms and with the use of different methods. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1d2f72587e694aa8d6435e176e87d4cb",
"text": "It is well known that the performance of context-based image processing systems can be improved by allowing the processor (e.g., an encoder or a denoiser) a delay of several samples before making a processing decision. Often, however, for such systems, traditional delayed-decision algorithms can become computationally prohibitive due to the growth in the size of the space of possible solutions. In this paper, we propose a reduced-complexity, one-pass, delayed-decision algorithm that systematically reduces the size of the search space, while also preserving its structure. In particular, we apply the proposed algorithm to two examples of adaptive context-based image processing systems, an image coding system that employs a context-based entropy coder, and a spatially adaptive image-denoising system. For these two types of widely used systems, we show that the proposed delayed-decision search algorithm outperforms instantaneous-decision algorithms with only a small increase in complexity. We also show that the performance of the proposed algorithm is better than that of other, higher complexity, delayed-decision algorithms.",
"title": ""
},
{
"docid": "9dcb0d6ed9c660bb8ac0af5964d3b428",
"text": "I develop a compositional theory of refinement for the branching time framework based on stuttering simulation and prove that if one system refines another, then a refinement map always exists. The existence of refinement maps in the linear time framework was studied in an influential paper by Abadi and Lamport. My interest in proving analogous results for the branching time framework arises from the observation that in the context of mechanical verification, branching time has some important advantages. By setting up the refinement problem in a way that differs from the Abadi and Lamport approach, I obtain a proof of the existence of refinement maps (in the branching time framework) that does not depend on any of the conditions found in the work of Abadi and Lamport e.g., machine closure, finite invisible nondeterminism, internal continuity, the use of history and prophecy variables, etc. A direct consequence is that refinement maps always exist in the linear time framework, subject only to the use of prophecy-like variables.",
"title": ""
},
{
"docid": "3cdbc153caaafcea54228b0c847aa536",
"text": "BACKGROUND\nAlthough the use of filling agents for soft-tissue augmentation has increased worldwide, most consensus statements do not distinguish between ethnic populations. There are, however, significant differences between Caucasian and Asian faces, reflecting not only cultural disparities, but also distinctive treatment goals. Unlike aesthetic patients in the West, who usually seek to improve the signs of aging, Asian patients are younger and request a broader range of indications.\n\n\nMETHODS\nMembers of the Asia-Pacific Consensus group-comprising specialists from the fields of dermatology, plastic surgery, anatomy, and clinical epidemiology-convened to develop consensus recommendations for Asians based on their own experience using cohesive polydensified matrix, hyaluronic acid, and calcium hydroxylapatite fillers.\n\n\nRESULTS\nThe Asian face demonstrates differences in facial structure and cosmetic ideals. Improving the forward projection of the \"T zone\" (i.e., forehead, nose, cheeks, and chin) forms the basis of a safe and effective panfacial approach to the Asian face. Successful augmentation may be achieved with both (1) high- and low-viscosity cohesive polydensified matrix/hyaluronic acid and (2) calcium hydroxylapatite for most indications, although some constraints apply.\n\n\nCONCLUSION\nThe Asia-Pacific Consensus recommendations are the first developed specifically for the use of fillers in Asian populations.\n\n\nCLINCIAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "ff9e0e5c2bb42955d3d29db7809414a1",
"text": "We present a novel methodology for the automated detection of breast lesions from dynamic contrast-enhanced magnetic resonance volumes (DCE-MRI). Our method, based on deep reinforcement learning, significantly reduces the inference time for lesion detection compared to an exhaustive search, while retaining state-of-art accuracy. This speed-up is achieved via an attention mechanism that progressively focuses the search for a lesion (or lesions) on the appropriate region(s) of the input volume. The attention mechanism is implemented by training an artificial agent to learn a search policy, which is then exploited during inference. Specifically, we extend the deep Q-network approach, previously demonstrated on simpler problems such as anatomical landmark detection, in order to detect lesions that have a significant variation in shape, appearance, location and size. We demonstrate our results on a dataset containing 117 DCE-MRI volumes, validating run-time and accuracy of lesion detection.",
"title": ""
},
{
"docid": "144c11393bef345c67595661b5b20772",
"text": "BACKGROUND\nAppropriate placement of the bispectral index (BIS)-vista montage for frontal approach neurosurgical procedures is a neuromonitoring challenge. The standard bifrontal application interferes with the operative field; yet to date, no other placements have demonstrated good agreement. The purpose of our study was to compare the standard BIS montage with an alternate BIS montage across the nasal dorsum for neuromonitoring.\n\n\nMATERIALS AND METHODS\nThe authors performed a prospective study, enrolling patients and performing neuromonitoring using both the standard and the alternative montage on each patient. Data from the 2 placements were compared and analyzed using a Bland-Altman analysis, a Scatter plot analysis, and a matched-pair analysis.\n\n\nRESULTS\nOverall, 2567 minutes of data from each montage was collected on 28 subjects. Comparing the overall difference in score, the alternate BIS montage score was, on average, 2.0 (6.2) greater than the standard BIS montage score (P<0.0001). The Bland-Altman analysis revealed a difference in score of -2.0 (95% confidence interval, -14.1, 10.1), with 108/2567 (4.2%) of the values lying outside of the limit of agreement. The scatter plot analysis overall produced a trend line with the equation y=0.94x+0.82, with an R coefficient of 0.82.\n\n\nCONCLUSIONS\nWe determined that the nasal montage produces values that have slightly more variability compared with that ideally desired, but the variability is not clinically significant. In cases where the standard BIS-vista montage would interfere with the operative field, an alternative positioning of the BIS montage across the nasal bridge and under the eye can be used.",
"title": ""
},
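A minimal sketch of the Bland-Altman computation used in the study above: the mean difference (bias) between paired scores and the 95% limits of agreement. The simulated scores are stand-ins for the recorded per-minute BIS values.

```python
# Bland-Altman sketch: bias and 95% limits of agreement for paired scores.
# The simulated values stand in for per-minute BIS scores from both montages.
import numpy as np

rng = np.random.default_rng(0)
standard = rng.normal(45, 8, size=500)                  # standard-montage BIS scores
alternate = standard + rng.normal(2.0, 6.2, size=500)   # alternate montage: offset + noise

diff = standard - alternate
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
outside = np.mean((diff < loa_low) | (diff > loa_high))

print(f"bias={bias:.1f}, limits of agreement=({loa_low:.1f}, {loa_high:.1f}), "
      f"{100 * outside:.1f}% of points outside")
```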
{
"docid": "7d49b6b4f1129cf55fc3c8af4f5dfe8f",
"text": "Staggering growth levels in the number of mobile devices and amount of mobile Internet usage has caused network providers to move away from unlimited data plans to less flexible charging models. As a result, users are being required to pay more for short accesses or underutilize a longer-term data plan. In this paper, we propose CrowdMAC, a crowdsourcing approach in which mobile users create a marketplace for mobile Internet access. Mobile users with residue capacity in their data plans share their access with other nearby mobile users for a small fee. CrowdMAC is implemented as a middleware framework with incentivebased mechanisms for admission control, service selection, and mobility management. CrowdMAC is implemented and evaluated on a testbed of Android phones and in the well known Qualnet simulator. Our evaluation results show that CrowdMAC: (i) effectively exercises the trade-off between revenue and transfer delay, (ii) adequately satisfies user-specified (delay) quality levels, and (iii) properly adapts to device mobility and achieves performance very close to the ideal case (upper bound).",
"title": ""
},
{
"docid": "12af7a639f885a173950304cf44b5a42",
"text": "Objective:To compare fracture rates in four diet groups (meat eaters, fish eaters, vegetarians and vegans) in the Oxford cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC-Oxford).Design:Prospective cohort study of self-reported fracture risk at follow-up.Setting:The United Kingdom.Subjects:A total of 7947 men and 26 749 women aged 20–89 years, including 19 249 meat eaters, 4901 fish eaters, 9420 vegetarians and 1126 vegans, recruited by postal methods and through general practice surgeries.Methods:Cox regression.Results:Over an average of 5.2 years of follow-up, 343 men and 1555 women reported one or more fractures. Compared with meat eaters, fracture incidence rate ratios in men and women combined adjusted for sex, age and non-dietary factors were 1.01 (95% CI 0.88–1.17) for fish eaters, 1.00 (0.89–1.13) for vegetarians and 1.30 (1.02–1.66) for vegans. After further adjustment for dietary energy and calcium intake the incidence rate ratio among vegans compared with meat eaters was 1.15 (0.89–1.49). Among subjects consuming at least 525 mg/day calcium the corresponding incidence rate ratios were 1.05 (0.90–1.21) for fish eaters, 1.02 (0.90–1.15) for vegetarians and 1.00 (0.69–1.44) for vegans.Conclusions:In this population, fracture risk was similar for meat eaters, fish eaters and vegetarians. The higher fracture risk in the vegans appeared to be a consequence of their considerably lower mean calcium intake. An adequate calcium intake is essential for bone health, irrespective of dietary preferences.Sponsorship:The EPIC-Oxford study is supported by The Medical Research Council and Cancer Research UK.",
"title": ""
},
{
"docid": "a5f78c3708a808fd39c4ced6152b30b8",
"text": "Building ontology for wireless network intrusion detection is an emerging method for the purpose of achieving high accuracy, comprehensive coverage, self-organization and flexibility for network security. In this paper, we leverage the power of Natural Language Processing (NLP) and Crowdsourcing for this purpose by constructing lightweight semi-automatic ontology learning framework which aims at developing a semantic-based solution-oriented intrusion detection knowledge map using documents from Scopus. Our proposed framework uses NLP as its automatic component and Crowdsourcing is applied for the semi part. The main intention of applying both NLP and Crowdsourcing is to develop a semi-automatic ontology learning method in which NLP is used to extract and connect useful concepts while in uncertain cases human power is leveraged for verification. This heuristic method shows a theoretical contribution in terms of lightweight and timesaving ontology learning model as well as practical value by providing solutions for detecting different types of intrusions.",
"title": ""
},
{
"docid": "b08023089abd684d26fabefb038cc9fa",
"text": "IMSI catching is a problem on all generations of mobile telecommunication networks, i.e., 2G (GSM, GPRS), 3G (HDSPA, EDGE, UMTS) and 4G (LTE, LTE+). Currently, the SIM card of a mobile phone has to reveal its identity over an insecure plaintext transmission, before encryption is enabled. This identifier (the IMSI) can be intercepted by adversaries that mount a passive or active attack. Such identity exposure attacks are commonly referred to as 'IMSI catching'. Since the IMSI is uniquely identifying, unauthorized exposure can lead to various location privacy attacks. We propose a solution, which essentially replaces the IMSIs with changing pseudonyms that are only identifiable by the home network of the SIM's own network provider. Consequently, these pseudonyms are unlinkable by intermediate network providers and malicious adversaries, and therefore mitigate both passive and active attacks, which we also formally verified using ProVerif. Our solution is compatible with the current specifications of the mobile standards and therefore requires no change in the infrastructure or any of the already massively deployed network equipment. The proposed method only requires limited changes to the SIM and the authentication server, both of which are under control of the user's network provider. Therefore, any individual (virtual) provider that distributes SIM cards and controls its own authentication server can deploy a more privacy friendly mobile network that is resilient against IMSI catching attacks.",
"title": ""
},
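A toy sketch of the pseudonym idea above: the SIM and the home network derive the same changing pseudonym from a shared secret and an epoch counter with an HMAC, so intermediate networks see only unlinkable tokens while the home network can still resolve them. Key management, epoch handling, and the token length are simplified assumptions; this is not the paper's exact protocol.

```python
# Toy pseudonym derivation: the SIM and the home network derive the same changing
# pseudonym from a shared secret and an epoch counter; outsiders cannot link them.
# A real deployment would use per-SIM keys and a proper epoch/rollover scheme.
import hmac
import hashlib

def pseudonym(key: bytes, imsi: str, epoch: int) -> str:
    msg = f"{imsi}:{epoch}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:15]   # IMSI-sized token

# Home network side: precompute this epoch's pseudonym -> IMSI table.
operator_key = b"secret provisioned by the home operator"        # simplified: one key for all
subscribers = ["901700000000001", "901700000000002"]
epoch = 42
lookup = {pseudonym(operator_key, imsi, epoch): imsi for imsi in subscribers}

# SIM side transmits only the pseudonym; the home network resolves it.
sent = pseudonym(operator_key, "901700000000001", epoch)
print(lookup[sent])   # -> the real IMSI, recoverable only by the home network
```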
{
"docid": "69e38ee6f8042fb0232b3e405afd2602",
"text": "We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real world applications on pollution prediction and a sensor network.",
"title": ""
},
{
"docid": "a7287ea0f78500670fb32fc874968c54",
"text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.",
"title": ""
},
{
"docid": "3f629998235c1cfadf67cf711b07f8b9",
"text": "The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.",
"title": ""
}
] |
scidocsrr
|
2fc1c3d9d5b302e82ab59834f7fedb89
|
Artificial Intelligence in Hypertension Diagnosis : A Review
|
[
{
"docid": "ba850aaec32b6ddc6eba23973d1e1608",
"text": "Data mining techniques have been widely used in clinical decision support systems for prediction and diagnosis of various diseases with good accuracy. These techniques have been very effective in designing clinical support systems because of their ability to discover hidden patterns and relationships in medical data. One of the most important applications of such systems is in diagnosis of heart diseases because it is one of the leading causes of deaths all over the world. Almost all systems that predict heart diseases use clinical dataset having parameters and inputs from complex tests conducted in labs. None of the system predicts heart diseases based on risk factors such as age, family history, diabetes, hypertension, high cholesterol, tobacco smoking, alcohol intake, obesity or physical inactivity, etc. Heart disease patients have lot of these visible risk factors in common which can be used very effectively for diagnosis. System based on such risk factors would not only help medical professionals but it would give patients a warning about the probable presence of heart disease even before he visits a hospital or goes for costly medical checkups. Hence this paper presents a technique for prediction of heart disease using major risk factors. This technique involves two most successful data mining tools, neural networks and genetic algorithms. The hybrid system implemented uses the global optimization advantage of genetic algorithm for initialization of neural network weights. The learning is fast, more stable and accurate as compared to back propagation. The system was implemented in Matlab and predicts the risk of heart disease with an accuracy of 89%.",
"title": ""
}
] |
[
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "19222de066550a2d27fc81b12c020d51",
"text": "Our purpose in this research is to develop a methodology to automatically and efficiently classify web images as UML static diagrams, and to produce a computer tool that implements this function. The tool receives as input a bitmap file (in different formats) and tells whether the image corresponds to a diagram. The tool does not require that the images are explicitly or implicitly tagged as UML diagrams. The tool extracts graphical characteristics from each image (such as grayscale histogram, color histogram and elementary geometric forms) and uses a combination of rules to classify it. The rules are obtained with machine learning techniques (rule induction) from a sample of 19000 web images manually classified by experts. In this work we do not consider the textual contents of the images.",
"title": ""
},
{
"docid": "f4a0738d814e540f7c208ab1e3666fb7",
"text": "In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic. Preprint for the 31st International Conference on Machine Learning (ICML 2014) 1 ar X iv :1 31 1. 48 25 v3 [ st at .M L ] 8 J un 2 01 5 Erratum After the publication of our article, we found an error in the proof of Lemma 1 which invalidates the main theorem. It appears that the information given to the algorithm is not sufficient for the main theorem to hold true. The theoretical guarantees would remain valid in a setting where the algorithm observes the instantaneous regret instead of noisy samples of the unknown function. We describe in this page the mistake and its consequences. Let f : X → R be the unknown function to be optimized, which is a sample from a Gaussian process. Let’s fix x, x1, . . . , xT ∈ X and the observations yt = f(xt)+ t where the noise variables t are independent Gaussian noise N (0, σ). We define the instantaneous regret rt = f(x?)− f(xt) and, MT = T ∑",
"title": ""
},
{
"docid": "0add9f22db24859da50e1a64d14017b9",
"text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.",
"title": ""
},
{
"docid": "f4d44bbbb5bc6ff2a8128ba50b4c8aaa",
"text": "In order to obtain a temperature range of a pasteurization process, a good controller that can reject the unidentified disturbance which may occur at any time is needed. In this paper, control structure of both multi-loop and cascade controllers are designed for a pasteurization mini plant Armfied PCT23 MKIL The control algorithm uses proportional-integral-derivative (PID) controller. Some tuning methods are simulated to obtain the best controller performance. The two controllers are simulated and tested on real plant and their performances are compared. From experiments, it is found that the multiloop controller has a superior set point tracking performance whereas the cascade controller is better for disturbance rejection.",
"title": ""
},
{
"docid": "e35f6f4e7b6589e992ceeccb4d25c9f1",
"text": "One of the key success factors of lending organizations in general and banks in particular is the assessment of borrower credit worthiness in advance during the credit evaluation process. Credit scoring models have been applied by many researchers to improve the process of assessing credit worthiness by differentiating between prospective loans on the basis of the likelihood of repayment. Thus, credit scoring is a very typical Data Mining (DM) classification problem. Many traditional statistical and modern computational intelligence techniques have been presented in the literature to tackle this problem. The main objective of this paper is to describe an experiment of building suitable Credit Scoring Models (CSMs) for the Sudanese banks. Two commonly discussed data mining classification techniques are chosen in this paper namely: Decision Tree (DT) and Artificial Neural Networks (ANN). In addition Genetic Algorithms (GA) and Principal Component Analysis (PCA) are also applied as feature selection techniques. In addition to a Sudanese credit dataset, German credit dataset is also used to evaluate these techniques. The results reveal that ANN models outperform DT models in most cases. Using GA as a feature selection is more effective than PCA technique. The highest accuracy of German data set (80.67%) and Sudanese credit scoring models (69.74%) are achieved by a hybrid GA-ANN model. Although DT and its hybrid models (PCA-DT, GA-DT) are outperformed by ANN and its hybrid models (PCA-ANN, GA-ANN) in most cases, they produced interpretable loan granting decisions.",
"title": ""
},
{
"docid": "9ac16df20364b0ae28d3164bbfb08654",
"text": "Complex event detection is an advanced form of data stream processing where the stream(s) are scrutinized to identify given event patterns. The challenge for many complex event processing (CEP) systems is to be able to evaluate event patterns on high-volume data streams while adhering to realtime constraints. To solve this problem, in this paper we present a hardware based complex event detection system implemented on field-programmable gate arrays (FPGAs). By inserting the FPGA directly into the data path between the network interface and the CPU, our solution can detect complex events at gigabit wire speed with constant and fully predictable latency, independently of network load, packet size or data distribution. This is a significant improvement over CPU based systems and an architectural approach that opens up interesting opportunities for hybrid stream engines that combine the flexibility of the CPU with the parallelism and processing power of FPGAs.",
"title": ""
},
{
"docid": "b63ef33cde2d725944f2fa249e48b9f8",
"text": "We introduce eyeglasses that present haptic feedback when using gaze gestures for input. The glasses utilize vibrotactile actuators to provide gentle stimulation to three locations on the user's head. We describe two initial user studies that were conducted to evaluate the easiness of recognizing feedback locations and participants' preferences for combining the feedback with gaze gestures. The results showed that feedback from a single actuator was the easiest to recognize and also preferred when used with gaze gestures. We conclude by presenting future use scenarios that could benefit from gaze gestures and haptic feedback.",
"title": ""
},
{
"docid": "a9e454767906f4ced5876ee73f3a4671",
"text": "Smart solutions for water quality monitoring are gaining importance with advancement in communication technology. This paper presents a detailed overview of recent works carried out in the field of smart water quality monitoring. Also, a power efficient, simpler solution for in-pipe water quality monitoring based on Internet of Things technology is presented. The model developed is used for testing water samples and the data uploaded over the Internet are analyzed. The system also provides an alert to a remote user, when there is a deviation of water quality parameters from the pre-defined set of standard values.",
"title": ""
},
{
"docid": "80ae8494ba7ebc70e9454d68f4dc5cbd",
"text": "Advanced deep learning methods have been developed to conduct prostate MR volume segmentation in either a 2D or 3D fully convolutional manner. However, 2D methods tend to have limited segmentation performance, since large amounts of spatial information of prostate volumes are discarded during the slice-by-slice segmentation process; and 3D methods also have room for improvement, since they use isotropic kernels to perform 3D convolutions whereas most prostate MR volumes have anisotropic spatial resolution. Besides, the fully convolutional structural methods achieve good performance for localization issues but neglect the per-voxel classification for segmentation tasks. In this paper, we propose a 3D Global Convolutional Adversarial Network (3D GCA-Net) to address efficient prostate MR volume segmentation. We first design a 3D ResNet encoder to extract 3D features from prostate scans, and then develop the decoder, which is composed of a multi-scale 3D global convolutional block and a 3D boundary refinement block, to address the classification and localization issues simultaneously for volumetric segmentation. Additionally, we combine the encoder-decoder segmentation network with an adversarial network in the training phrase to enforce the contiguity of long-range spatial predictions. Throughout the proposed model, we use anisotropic convolutional processing for better feature learning on prostate MR scans. We evaluated our 3D GCA-Net model on two public prostate MR datasets and achieved state-of-the-art performances.",
"title": ""
},
{
"docid": "6816bb15dba873244306f22207525bee",
"text": "Imbalance suggests a feeling of dynamism and movement in static objects. It is therefore not surprising that many 3D models stand in impossibly balanced configurations. As long as the models remain in a computer this is of no consequence: the laws of physics do not apply. However, fabrication through 3D printing breaks the illusion: printed models topple instead of standing as initially intended. We propose to assist users in producing novel, properly balanced designs by interactively deforming an existing model. We formulate balance optimization as an energy minimization, improving stability by modifying the volume of the object, while preserving its surface details. This takes place during interactive editing: the user cooperates with our optimizer towards the end result. We demonstrate our method on a variety of models. With our technique, users can produce fabricated objects that stand in one or more surprising poses without requiring glue or heavy pedestals.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "e9bc802e8ce6a823526084c82aa89c95",
"text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward 5G. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in LTE /LTE-Advanced systems. Thus, it is of great interest to study how to efficiently and effectively combine NOMA and SU-MIMO techniques together for further system performance improvement. This paper investigates the combination of NOMA with open-loop and closed-loop SU-MIMO. The key issues involved in the combination are presented and discussed, including scheduling algorithm, successive interference canceller (SIC) order determination, transmission power assignment and feedback design. The performances of NOMA with SU-MIMO are investigated by system-level simulations with very practical assumptions. Simulation results show that compared to orthogonal multiple access system, NOMA can achieve large performance gains both open-loop and closed-loop SU-MIMO, which are about 23% for cell average throughput and 33% for cell-edge user throughput.",
"title": ""
},
{
"docid": "1dd15eb76573cb6362e9efde9f5631e5",
"text": "Research on API migration and language conversion can be informed by empirical data about API usage. For instance, such data may help with designing and defending mapping rules for API migration in terms of relevance and applicability. We describe an approach to large-scale API-usage analysis of open-source Java projects, which we also instantiate for the Source-Forge open-source repository in a certain way. Our approach covers checkout, building, tagging with metadata, fact extraction, analysis, and synthesis with a large degree of automation. Fact extraction relies on resolved (type-checked) ASTs. We describe a few examples of API-usage analysis; they are motivated by API migration. These examples are concerned with analysing API footprint (such as the numbers of distinct APIs used in a project), API coverage (such as the percentage of methods of an API used in a corpus), and framework-like vs. class-library-like usage.",
"title": ""
},
{
"docid": "77c922c3d2867fa7081a9f18ae0b1151",
"text": "The failure of critical components in industrial systems may have negative consequences on the availability, the productivity, the security and the environment. To avoid such situations, the health condition of the physical system, and particularly of its critical components, can be constantly assessed by using the monitoring data to perform on-line system diagnostics and prognostics. The present paper is a contribution on the assessment of the health condition of a Computer Numerical Control (CNC) tool machine and the estimation of its Remaining Useful Life (RUL). The proposed method relies on two main phases: an off-line phase and an on-line phase. During the first phase, the raw data provided by the sensors are processed to extract reliable features. These latter are used as inputs of learning algorithms in order to generate the models that represent the wear’s behavior of the cutting tool. Then, in the second phase, which is an assessment one, the constructed models are exploited to identify the tool’s current health state, predict its RUL and the associated confidence bounds. The proposed method is applied on a benchmark of condition monitoring data gathered during several cuts of a CNC tool. Simulation results are obtained and discussed at the end of the paper.",
"title": ""
},
{
"docid": "2b51fdb5800a95b31fa5c2cff493ad80",
"text": "An auditory-based feature extraction algorithm is presented. We name the new features as cochlear filter cepstral coefficients (CFCCs) which are defined based on a recently developed auditory transform (AT) plus a set of modules to emulate the signal processing functions in the cochlea. The CFCC features are applied to a speaker identification task to address the acoustic mismatch problem between training and testing environments. Usually, the performance of acoustic models trained in clean speech drops significantly when tested in noisy speech. The CFCC features have shown strong robustness in this kind of situation. In our experiments, the CFCC features consistently perform better than the baseline MFCC features under all three mismatched testing conditions-white noise, car noise, and babble noise. For example, in clean conditions, both MFCC and CFCC features perform similarly, over 96%, but when the signal-to-noise ratio (SNR) of the input signal is 6 dB, the accuracy of the MFCC features drops to 41.2%, while the CFCC features still achieve an accuracy of 88.3%. The proposed CFCC features also compare favorably to perceptual linear predictive (PLP) and RASTA-PLP features. The CFCC features consistently perform much better than PLP. Under white noise, the CFCC features are significantly better than RASTA-PLP, while under car and babble noise, the CFCC features provide similar performances to RASTA-PLP.",
"title": ""
},
{
"docid": "aba7cb0f5f50a062c42b6b51457eb363",
"text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.",
"title": ""
},
{
"docid": "c14da39ea48b06bfb01c6193658df163",
"text": "We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail, and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through user study, we analyze the three design factors, namely posture, commitment method and target size, to assess the design of the FingerPad. Though the results show some trade-off among the factors, generally participants achieve 93% accuracy for very small targets (1.2mm-width) in the seated condition, and 92% accuracy for 2.5mm-width targets in the walking condition.",
"title": ""
},
{
"docid": "61ad35eaee012d8c1bddcaeee082fa22",
"text": "For realistic simulation it is necessary to thoroughly define and describe light-source characteristics¿especially the light-source geometry and the luminous intensity distribution.",
"title": ""
}
] |
scidocsrr
|
39b87dc932a4fe7cfe907b87517986e1
|
FACE spoofing detection using LDP-TOP
|
[
{
"docid": "d930e323cb7563edce3f7724be98b822",
"text": "Identity spoofing is a contender for high-security face recognition applications. With the advent of social media and globalized search, our face images and videos are wide-spread on the internet and can be potentially used to attack biometric systems without previous user consent. Yet, research to counter these threats is just on its infancy we lack public standard databases, protocols to measure spoofing vulnerability and baseline methods to detect these attacks. The contributions of this work to the area are three-fold: firstly we introduce a publicly available PHOTO-ATTACK database with associated protocols to measure the effectiveness of counter-measures. Based on the data available, we conduct a study on current state-of-the-art spoofing detection algorithms based on motion analysis, showing they fail under the light of these new dataset. By last, we propose a new technique of countermeasure solely based on foreground/background motion correlation using Optical Flow that outperforms all other algorithms achieving nearly perfect scoring with an equal-error rate of 1.52% on the available test data. The source code leading to the reported results is made available for the replicability of findings in this article.",
"title": ""
},
{
"docid": "fe33ff51ca55bf745bdcdf8ee02e2d36",
"text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.",
"title": ""
}
] |
[
{
"docid": "559637a4f8f5b99bb3210c5c7d03d2e0",
"text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "0f8f69a6d9da8cc22c5794a10da8dfba",
"text": "The recently introduced random walker segmentation algorithm by Grady and Funka-Lea (2004) has been shown to have desirable theoretical properties and to perform well on a wide variety of images in practice. However, this algorithm requires user-specified labels and produces a segmentation where each segment is connected to a labeled pixel. We show that incorporation of a nonparametric probability density model allows for an extended random walkers algorithm that can locate disconnected objects and does not require user-specified labels. Finally, we show that this formulation leads to a deep connection with the popular graph cuts method by Boykov et al. (2001) and Wu and Leahy (1993).",
"title": ""
},
{
"docid": "9636c75bdbbd7527abdd8fbac1466d55",
"text": "Predicting the occurrence of a particular event of interest at future time points is the primary goal of survival analysis. The presence of incomplete observations due to time limitations or loss of data traces is known as censoring which brings unique challenges in this domain and differentiates survival analysis from other standard regression methods. The popularly used survival analysis methods such as Cox proportional hazard model and parametric survival regression suffer from some strict assumptions and hypotheses that are not realistic in most of the real-world applications. To overcome the weaknesses of these two types of methods, in this paper, we reformulate the survival analysis problem as a multi-task learning problem and propose a new multi-task learning based formulation to predict the survival time by estimating the survival status at each time interval during the study duration. We propose an indicator matrix to enable the multi-task learning algorithm to handle censored instances and incorporate some of the important characteristics of survival problems such as non-negative non-increasing list structure into our model through max-heap projection. We employ the L2,1-norm penalty which enables the model to learn a shared representation across related tasks and hence select important features and alleviate over-fitting in high-dimensional feature spaces; thus, reducing the prediction error of each task. To efficiently handle the two non-smooth constraints, in this paper, we propose an optimization method which employs Alternating Direction Method of Multipliers (ADMM) algorithm to solve the proposed multi-task learning problem. We demonstrate the performance of the proposed method using real-world microarray gene expression high-dimensional benchmark datasets and show that our method outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "1c068cfb1a801a89ca87a1ac1c279c97",
"text": "The analysis of authorial style, termed stylometry, assumes that style is quantifiably measurable for evaluation of distinctive qualities. Stylometry research has yielded several methods and tools over the past 200 years to handle a variety of challenging cases. This survey reviews several articles within five prominent subtasks: authorship attribution, authorship verification, authorship profiling, stylochronometry, and adversarial stylometry. Discussions on datasets, features, experimental techniques, and recent approaches are provided. Further, a current research challenge lies in the inability of authorship analysis techniques to scale to a large number of authors with few text samples. Here, we perform an extensive performance analysis on a corpus of 1,000 authors to investigate authorship attribution, verification, and clustering using 14 algorithms from the literature. Finally, several remaining research challenges are discussed, along with descriptions of various open-source and commercial software that may be useful for stylometry subtasks.",
"title": ""
},
{
"docid": "6c3be94fe73ef79d711ef5f8b9c789df",
"text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?",
"title": ""
},
{
"docid": "ab32c8e5a5f8f7054d7a820514b1a84b",
"text": "Descriptions and reviews for products abound on the web and characterise the corresponding products through their aspects. Extracting these aspects is essential to better understand these descriptions, e.g., for comparing or recommending products. Current pattern-based aspect extraction approaches focus on flat patterns extracting flat sets of adjective-noun pairs. Aspects also have crucial importance on sentiment classification in which sentiments are matched with aspect-level expressions. A preliminary step in both aspect extraction and aspect based sentiment analysis is to detect aspect terms and opinion targets. In this paper, we propose a sequential learning approach to extract aspect terms and opinion targets from opinionated documents. For the first time, we use semi-markov conditional random fields for this task and we incorporate word embeddings as features into the learning process. We get comparative results on the benchmark datasets for the subtask of aspect term extraction in SemEval-2014 Task 4 and the subtask of opinion target extraction in SemEval-2015 Task 12. Our results show that word embeddings improve the detection accuracy for aspect terms and opinion targets.",
"title": ""
},
{
"docid": "b9d78f22647d00aab0a79aa0c5dacdcf",
"text": "Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input z to a sample x that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that use a random generator function to transform a deterministic input y′ to a sample x. Our BC-GANs extend traditional GANs to a Bayesian framework, and naturally handle unsupervised learning, supervised learning, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperforms the state-of-the-arts.",
"title": ""
},
{
"docid": "49663600aeff26af65fbfe39f2ed0161",
"text": "Misuse cases and attack trees have been suggested for security requirements elicitation and threat modeling in software projects. Their use is believed to increase security awareness throughout the software development life cycle. Experiments have identified strengths and weaknesses of both model types. In this paper we present how misuse cases and attack trees can be linked to get a high-level view of the threats towards a system through misuse case diagrams and a more detailed view on each threat through attack trees. Further, we introduce links to security activity descriptions in the form of UML activity graphs. These can be used to describe mitigating security activities for each identified threat. The linking of different models makes most sense when security modeling is supported by tools, and we present the concept of a security repository that is being built to store models and relations such as those presented in this paper.",
"title": ""
},
{
"docid": "2bc608b8b4463dc096af633a621c7d76",
"text": "Given the growth of the service sector, and advances in information technology and communications that facilitate the management of relationships with customers, models of service and relationships are a fast-growing area of marketing science. This article summarizes existing work in this area and identifies promising topics for future research. Models of service and relationships can help managers manage service more efficiently, customize service more effectively, manage customer satisfaction and relationships, and model the financial impact of those customer relationships. Models for managing service have often emphasized analytical approaches to pricing, but emerging issues such as the tradeoff between privacy and customization are attracting increasing attention. The tradeoffs between productivity and customization have also been addressed by both analytical and empirical models, but future research in the area of service customization will likely place increased emphasis on e-service and truly personalized interactions. Relationship models will focus less on models of customer expectations and length of relationship, and more on modeling the effects of dynamic marketing interventions with individual customers. The nature of service relationships increasingly leads to financial impact being assessed within customer and across product, rather than the traditional reverse, suggesting the increasing importance of analyzing customer lifetime value and managing the firm’s customer equity.",
"title": ""
},
{
"docid": "9e315cd14de8f7082be8b0a3160b6552",
"text": "Recently, the percentage of people with hypertension is increasing, and this phenomenon is widely concerned. At the same time, wireless home Blood Pressure (BP) monitors become accessible in people’s life. Since machine learning methods have made important contributions in different fields, many researchers have tried to employ them in dealing with medical problems. However, the existing studies for BP prediction are all based on clinical data with short time ranges. Besides, there do not exist works which can jointly make use of historical measurement data (e.g. BP and heart rate) and contextual data (e.g. age, gender, BMI and altitude). Recurrent Neural Networks (RNNs), especially those using Long Short-Term Memory (LSTM) units, can capture long range dependencies, so they are effective in modeling variable-length sequences. In this paper, we propose a novel model named recurrent models with contextual layer, which can model the sequential measurement data and contextual data simultaneously to predict the trend of users’ BP. We conduct our experiments on the BP data set collected from a type of wireless home BP monitors, and experimental results show that the proposed models outperform several competitive compared methods.",
"title": ""
},
{
"docid": "1e69c1aef1b194a27d150e45607abd5a",
"text": "Methods of semantic relatedness are essential for wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure by considering frequency cut-off for big rams tries to remove low and high frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing point wise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI for removing insignificant features is more effective approach than frequency cut-off.",
"title": ""
},
{
"docid": "517d6d154c53297192d64d19e23e1a09",
"text": "As computational work becomes more and more integral to many aspects of scientific research, computational reproducibility has become an issue of increasing importance to computer systems researchers and domain scientists alike. Though computational reproducibility seems more straight forward than replicating physical experiments, the complex and rapidly changing nature of computer environments makes being able to reproduce and extend such work a serious challenge. In this paper, I explore common reasons that code developed for one research project cannot be successfully executed or extended by subsequent researchers. I review current approaches to these issues, including virtual machines and workflow systems, and their limitations. I then examine how the popular emerging technology Docker combines several areas from systems research - such as operating system virtualization, cross-platform portability, modular re-usable elements, versioning, and a 'DevOps' philosophy, to address these challenges. I illustrate this with several examples of Docker use with a focus on the R statistical environment.",
"title": ""
},
{
"docid": "7c8b31f03a080aedc3ce501018daa942",
"text": "We propose an architecture for securely resolving IP addresses into hardware addresses over an Ethernet. The proposed architecture consists of a secure server connected to the Ethernet and two protocols: an invite-accept protocol and a request-reply protocol. Each computer connected to the Ethernet can use the invite-accept protocol to periodically record its IP address and its hardware address in the database of the secure server. Each computer can later use the requestreply protocol to obtain the hardware address of any other computer connected to the Ethernet from the database of the secure server. These two protocols are designed to overcome the actions of any adversary that can lose sent messages, arbitrarily modify the fields of sent messages, and replay old messages.",
"title": ""
},
{
"docid": "aaabe81401e33f7e2bb48dd6d5970f9b",
"text": "Brain tumor is the most life undermining sickness and its recognition is the most challenging task for radio logistics by manual detection due to varieties in size, shape and location and sort of tumor. So, detection ought to be quick and precise and can be obtained by automated segmentation methods on MR images. In this paper, neutrosophic sets based segmentation is performed to detect the tumor. MRI is an intense apparatus over CT to analyze the interior segments of the body and the tumor. Tumor is detected and true, false and indeterminacy values of tumor are determined by this technique and the proposed method produce the beholden results.",
"title": ""
},
{
"docid": "90dcd18ccaa1bddbcce8f540a655abe7",
"text": "Medical organizations find it challenging to adopt cloud-based electronic medical records services, due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient centric approach for EHR management where the responsibility of authorizing data access is handled at the patients' end. This however creates a significant overhead for the patient who has to authorize every access of their health record. This is not practical given the multiple personnel involved in providing care and that at times the patient may not be in a state to provide this authorization. Hence there is a need of developing a proper authorization delegation mechanism for safe, secure and easy cloud-based EHR management. We have developed a novel, centralized, attribute based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access of patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of cloud-based EHR's access authority to the medical providers. In this paper, we describe this novel ABE approach as well as the prototype system that we have created to illustrate it.",
"title": ""
},
{
"docid": "0ad6b4e072d150133d736800110a366b",
"text": "In this paper a trajectory planner for n autonomous vehicles following a common leader is presented, with the planning being accomplished in real time and in a three dimensional setting. The trajectory planner is designed such that n follower vehicles behave as n distinct points of a unique two dimensional trailer attached to a single leader vehicle. We prove that for a wide range of initial conditions the trailer reference frame converges to a unique solution, thus guaranteeing that each follower can plan its trajectory independently from its peers, thereby reducing the need for communications among vehicles. Additionally, convergence to a fixed formation of n+1 vehicles with respect to the trailer reference frame is also guaranteed. Finally, we present bounds on the planned velocity and acceleration, which provide conditions for the feasibility of the planned trajectory. An experimental validation of the planner's behavior is presented with quadrotor vehicles, demonstrating the richness of the planned trajectories.",
"title": ""
},
{
"docid": "7775c00550a6042c38f38bac257ec334",
"text": "Real-world face recognition datasets exhibit long-tail characteristics, which results in biased classifiers in conventionally-trained deep neural networks, or insufficient data when long-tail classes are ignored. In this paper, we propose to handle long-tail classes in the training of a face recognition engine by augmenting their feature space under a center-based feature transfer framework. A Gaussian prior is assumed across all the head (regular) classes and the variance from regular classes are transferred to the long-tail class representation. This encourages the long-tail distribution to be closer to the regular distribution, while enriching and balancing the limited training data. Further, an alternating training regimen is proposed to simultaneously achieve less biased decision boundaries and a more discriminative feature representation. We conduct empirical studies that mimic long-tail datasets by limiting the number of samples and the proportion of long-tail classes on the MS-Celeb-1M dataset. We compare our method with baselines not designed to handle long-tail classes and also with state-of-the-art methods on face recognition benchmarks. State-of-the-art results on LFW, IJB-A and MS-Celeb-1M datasets demonstrate the effectiveness of our feature transfer approach and training strategy. Finally, our feature transfer allows smooth visual interpolation, which demonstrates disentanglement to preserve identity of a class while augmenting its feature space with non-identity variations.",
"title": ""
},
{
"docid": "df067af7f3c2327724bad4c5c1206f70",
"text": "The ultimate objective of any OCR is to simulate the human reading capability. Optical Character Recognition, usually abbreviated to OCR, is the mechanical or electronic translation of images of handwritten, typewritten or printed text into machine. Character Recognition refers to the process of converting printed Text documents into translated Unicode Text. The printed documents are scanned using standard scanners which produce an image of the scanned document. Lines are identified by an algorithm where we identify top and bottom of line and in each line character boundaries are calculated by an algorithm, using these calculation",
"title": ""
},
{
"docid": "a0c9d3c2b14395a6d476b12c5e8b28b0",
"text": "Undergraduate research experiences enhance learning and professional development, but providing effective and scalable research training is often limited by practical implementation and orchestration challenges. We demonstrate Agile Research Studios (ARS)---a socio-technical system that expands research training opportunities by supporting research communities of practice without increasing faculty mentoring resources.",
"title": ""
},
{
"docid": "d015cb7a9afaac66909243de840a446b",
"text": "In the classical job shop scheduling problem (JSSP), n jobs are processed to completion on m unrelated machines. Each job requires processing on each machine exactly once. For each job, technology constraints specify a complete, distinct routing which is fixed and known in advance. Processing times are sequence-independent, fixed, and known in advance. Each machine is continuously available from time zero, and operations are processed without preemption. The objective is to minimize the maximum completion time (makespan). The flexible-routing job shop (FRJS) scheduling problem, or job shop with multipurpose machines, extends JSSP by assuming that a machine may be capable of performing more than one type of operation. (For a given operation, there must exist at least one machine capable of performing it.) FRJS approximates a flexible manufacturing environment with numerically controlled work centers equipped with interchangeable tool magazines. This report extends a dynamic, adaptive tabu search (TS) strategy previously described for job shops with single and multiple instances of single-purpose machines, and applies it to FRJS. We present “proof-of-concept” results for three problems constructed from difficult JSSP instances.",
"title": ""
}
] |
scidocsrr
|
a4b2da91748b18d4dddb49c6d4bc534d
|
Critical Failure Factors in ERP Implementation
|
[
{
"docid": "fe8398493a04c367b089b175711984d7",
"text": "E RP software packages that manage and integrate business processes across organizational functions and locations cost millions of dollars to buy, several times as much to implement, and necessitate disruptive organizational change. While some companies have enjoyed significant gains, others have had to scale back their projects and accept minimal benefits, or even abandon implementation of ERP projects [4]. Historically, a common problem when adopting package software has been the issue of “misfits,” that is, the gaps between the functionality offered by the package and that required by the adopting organization [1, 3]. As a result, organizations have had to choose among adapting to the new functionality, living with the shortfall, instituting workarounds, or customizing the package. ERP software, as a class of package software, also presents this problematic choice to organizations. The problem is exacerbated because ERP implementation is more complex due to cross-module integration, data standardization, adoption of the underlying business model (“best practices”), compressed implementation schedule, and the involvement of a large number of stakeholders. The knowledge gap among implementation personnel is usually significant. Few organizational users underChristina Soh, Sia Siew Kien, and Joanne Tay-Yap",
"title": ""
}
] |
[
{
"docid": "d20068a72753d8c7238b1c0734ed5b2e",
"text": "Left atrial ablation is increasingly used to treat patients with symptomatic atrial fibrillation (AF). Prior to ablation, exclusion of left atrial appendage (LAA) thrombus is important. Whether ECG-gated dual-source computed tomography (DSCT) provides a sensitive means of detecting LAA thrombus in patients undergoing percutaneous AF ablation is unknown. Thus, we sought to determine the utility of ECG-gated DSCT in detecting LAA thrombus in patients with AF. A total of 255 patients (age 58 ± 11 years, 78% male, ejection fraction 58 ± 9%) who underwent ECG-gated DSCT and transesophageal echocardiography (TEE) prior to AF ablation between February 2006 and October 2007 were included. CHADS2 score and demographic data were obtained prospectively. Gated DSCT images were independently reviewed by two cardiac imagers blinded to TEE findings. The LAA was either defined as normal (fully opacified) or abnormal (under-filled) by DSCT. An under-filled LAA was identified in 33 patients (12.9%), of whom four had thrombus confirmed by TEE. All patients diagnosed with LAA thrombus using TEE also had an abnormal LAA by gated DSCT. Thus, sensitivity and specificity for gated DSCT were 100% and 88%, respectively. No cases of LAA filling defects were observed in patients <51 years old with a CHADS2 of 0. In patients referred for AF ablation, thrombus is uncommon in the absence of additional risk factors. Gated DSCT provides excellent sensitivity for the detection of thrombus. Thus, in AF patients with a CHADS2 of 0, gated DSCT may provide a useful stand-alone imaging modality.",
"title": ""
},
{
"docid": "62adf6e18fefdc0cd9284dc0749307c5",
"text": "We demonstrate that it is possible to achieve accurate localization and tracking of a target in a randomly placed wireless sensor network composed of inexpensive components of limited accuracy. The crucial enabler for this is a reasonably accurate local coordinate system aligned with the global coordinates. We present an algorithm for creating such a coordinate system without the use of global control, globally accessible beacon signals, or accurate estimates of inter-sensor distances. The coordinate system is robust and automatically adapts to the failure or addition of sensors. Extensive theoretical analysis and simulation results are presented. Two key theoretical results are: there is a critical minimum average neighborhood size of 15 for good accuracy and there is a fundamental limit on the resolution of any coordinate system determined strictly from local communication. Our simulation results show that we can achieve position accuracy to within 20% of the radio range even when there is variation of up to 10% in the signal strength of the radios. The algorithm improves with finer quantizations of inter-sensor distance estimates: with 6 levels of quantization position errors better than 10% achieved. Finally we show how the algorithm gracefully generalizes to target tracking tasks.",
"title": ""
},
{
"docid": "611fdf1451bdd5c683c5be00f46460b8",
"text": "Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.",
"title": ""
},
{
"docid": "86dc15207ddb57fb6f247017c9ea6abd",
"text": "The distribution of microfilaments and microtubules was examined in pleopod tegumental glands of male and female lobsters (Homarus americanus). Glands were labeled with rhodamine-phalloidin or antibodies to tubulin, the antitubulin antibodies being demonstrated with secondary antibodies conjugated to fluorescein. The labeled glands were then examined using either a Zeiss epifluorescence microscope or a Bio-Rad confocal scanning microscope. Some glands were examined using transmission electron microscopy. Glands from males and females showed the same distribution of microfilaments and microtubules, which appeared most abundantly in the common locus and around the main duct of each rosette. F-actin was specifically found in the central lobe of the central cell, around the ductules of the common locus, surrounding the finger-like projection of secretory cells, and encircling the main duct draining the rosette. Microtubules were most abundant in the finger-like projections of the secretory cells, in the cytoplasm of the central cell and canal cell, and around the main duct of the canal cell. Contraction of the microfilaments may facilitate movement of secretory product from the rosette, while the microtubules may provide structural support for the attenuated finger-like projections and the main duct. Electron micrographs suggest there is some interaction between these two elements of the cytoskeleton.",
"title": ""
},
{
"docid": "dd1fd4f509e385ea8086a45a4379a8b5",
"text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.",
"title": ""
},
{
"docid": "c4b5f77f7cce22bca020fe1aca8df8b4",
"text": "In the field of law there is an absolute need for summarizing the texts of court decisions in order to make the content of the cases easily accessible for legal professionals. During the SALOMON and MOSAIC projects we investigated the summarization and retrieval of legal cases. This article presents some of the main findings while integrating the research results of experiments on legal document summarization by other research groups. In addition, we propose novel avenues of research for automatic text summarization, which we currently exploit when summarizing court decisions in the ACILA project. Techniques for automated concept learning and argument recognition are here the most challenging. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "287f30c5e338fc32a82cf2ec5366c6c5",
"text": "A hallmark of mammalian immunity is the heterogeneity of cell fate that exists among pathogen-experienced lymphocytes. We show that a dividing T lymphocyte initially responding to a microbe exhibits unequal partitioning of proteins that mediate signaling, cell fate specification, and asymmetric cell division. Asymmetric segregation of determinants appears to be coordinated by prolonged interaction between the T cell and its antigen-presenting cell before division. Additionally, the first two daughter T cells displayed phenotypic and functional indicators of being differentially fated toward effector and memory lineages. These results suggest a mechanism by which a single lymphocyte can apportion diverse cell fates necessary for adaptive immunity.",
"title": ""
},
{
"docid": "6d925c32d3900512e0fd0ed36b683c69",
"text": "This paper presents a detailed design process of an ultra-high speed, switched reluctance machine for micro machining. The performance goal of the machine is to reach a maximum rotation speed of 750,000 rpm with an output power of 100 W. The design of the rotor involves reducing aerodynamic drag, avoiding mechanical resonance, and mitigating excessive stress. The design of the stator focuses on meeting the torque requirement while minimizing core loss and copper loss. The performance of the machine and the strength of the rotor structure are both verified through finite-element simulations The final design is a 6/4 switched reluctance machine with a 6mm diameter rotor that is wrapped in a carbon fiber sleeve and exhibits 13.6 W of viscous loss. The stator has shoeless poles and exhibits 19.1 W of electromagnetic loss.",
"title": ""
},
{
"docid": "dbe84ebcf821995c6d7eb64fcbde5381",
"text": "Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the oneand two-sample ttests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small.",
"title": ""
},
{
"docid": "309fef7105de05da3a0e987c1dc1c3cc",
"text": "Flying Ad hoc Network (FANET) is an infrastructure-less multi-hop radio ad hoc network in which Unmanned Aerial Vehicles (UAVs) and Ground Control Station (GCS) collaborates to forward data traffic. Compared to the standard Mobile Ad hoc NETworks (MANETs), the FANET architecture has some specific features (3D mobility, low UAV density, intermittent network connectivity) that bring challenges to the communication protocol design. Such routing protocol must provide safety by finding an accurate and reliable route between UAVs. This safety can be obtained through the use of agile method during software based routing protocol development (for instance the use of Model Driven Development) by mapping each FANET safety requirement into the routing design process. This process must be completed with a sequential safety validation testing with formal verification tools, standardized simulator (by using real simulation environment) and real-world experiments. In this paper, we considered FANET communication safety by presenting design methodologies and evaluations of FANET routing protocols. We use the LARISSA architecture to guarantee the efficiency and accuracy of the whole system. We also use the model driven development methodology to provide model and code consistency through the use of formal verification tools. To complete the FANET safety validation, OMNeT++ simulations (using real UAVs mobility traces) and real FANET outdoor experiments have been carried out. We confront both results to evaluate routing protocol performances and conclude about its safety consideration.",
"title": ""
},
{
"docid": "6bebc44f6d13b3c8b9cf54bbfab6924f",
"text": "There has been considerable interest in teaching \"coding\" to primary school aged students, and many creative \"Initial Learning Environments\" (ILEs) have been released to encourage this. Announcements and commentaries about such developments can polarise opinions, with some calling for widespread teaching of coding, while others see it as too soon to have students learning industry-specific skills. It is not always clear what is meant by teaching coding (which is often used as a synonym for programming), and what the benefits and costs of this are. Here we explore the meaning and potential impact of learning coding/programming for younger students. We collect the arguments for and against learning coding at a young age, and review the initiatives that have been developed to achieve this (including new languages, school curricula, and teaching resources). This leads to a set of criteria around the value of teaching young people to code, to inform curriculum designers, teachers and parents. The age at which coding should be taught can depend on many factors, including the learning tools used, context, teacher training and confidence, culture, specific skills taught, how engaging an ILE is, how much it lets students explore concepts for themselves, and whether opportunities exist to continue learning after an early introduction.",
"title": ""
},
{
"docid": "b3feaaf615ec03030a525825de697cce",
"text": "Reaching and grasping in primates depend on the coordination of neural activity in large frontoparietal ensembles. Here we demonstrate that primates can learn to reach and grasp virtual objects by controlling a robot arm through a closed-loop brain-machine interface (BMIc) that uses multiple mathematical models to extract several motor parameters (i.e., hand position, velocity, gripping force, and the EMGs of multiple arm muscles) from the electrical activity of frontoparietal neuronal ensembles. As single neurons typically contribute to the encoding of several motor parameters, we observed that high BMIc accuracy required recording from large neuronal ensembles. Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations.",
"title": ""
},
{
"docid": "82bdaf46188ffa0e2bd555aadaa0957c",
"text": "Smart pills were originally developed for diagnosis; however, they are increasingly being applied to therapy - more specifically drug delivery. In addition to smart drug delivery systems, current research is also looking into localization systems for reaching the target areas, novel locomotion mechanisms and positioning systems. Focusing on the major application fields of such devices, this article reviews smart pills developed for local drug delivery. The review begins with the analysis of the medical needs and socio-economic benefits associated with the use of such devices and moves onto the discussion of the main implemented technological solutions with special attention given to locomotion systems, drug delivery systems and power supply. Finally, desired technical features of a fully autonomous robotic capsule for local drug delivery are defined and future research trends are highlighted.",
"title": ""
},
{
"docid": "47df1c464b766f2dbd1e7e0cc7ccb6b2",
"text": "he past two decades have seen a dramatic change in the role of risk management in corporations. Twenty years ago, the job of the corporate risk manager—typically, a low-level position in the corporate treasury—involved mainly the purchase of insurance. At the same time, treasurers were responsible for the hedging of interest rate and foreign exchange exposures. Over the last ten years, however, corporate risk management has expanded well beyond insurance and the hedging of financial exposures to include a variety of other kinds of risk—notably operational risk, reputational risk, and, most recently, strategic risk. What’s more, at a large and growing number of companies, the risk management function is directed by a senior executive with the title of chief risk officer (CRO) and overseen by a board of directors charged with monitoring risk measures and setting limits for these measures. A corporation can manage risks in one of two fundamentally different ways: (1) one risk at a time, on a largely compartmentalized and decentralized basis; or (2) all risks viewed together within a coordinated and strategic framework. The latter approach is often called “enterprise risk management,” or “ERM” for short. In this article, we suggest that companies that succeed in creating an effective ERM have a long-run competitive advantage over those that manage and monitor risks individually. Our argument in brief is that, by measuring and managing its risks consistently and systematically, and by giving its business managers the information and incentives to optimize the tradeoff between risk and return, a company strengthens its ability to carry out its strategic plan. In the pages that follow, we start by explaining how ERM can give companies a competitive advantage and add value for shareholders. Next we describe the process and challenges involved in implementing ERM. We begin by discussing how a company should assess its risk “appetite,” an assessment that should guide management’s decision about how much and which risks to retain and which to lay off. Then we show how companies should measure their risks. Third, we discuss various means of laying off “non-core” risks, which, as we argue below, increases the firm’s capacity for bearing those “core” risks the firm chooses to retain. Though ERM is conceptually straightforward, its implementation is not. And in the last—and longest—section of the chapter, we provide an extensive guide to the major difficulties that arise in practice when implementing ERM.",
"title": ""
},
{
"docid": "3ce574cede850ade17a9600a54c7adbf",
"text": "Cloud computing is an emerging and fast-growing computing paradigm that has gained great interest from both industry and academia. Consequently, many researchers are actively involved in cloud computing research projects. One major challenge facing cloud computing researchers is the lack of a comprehensive cloud computing experimental tool to use in their studies. This paper introduces CloudExp, a modeling and simulation environment for cloud computing. CloudExp can be used to evaluate a wide spectrum of cloud components such as processing elements, data centers, storage, networking, Service Level Agreement (SLA) constraints, web-based applications, Service Oriented Architecture (SOA), virtualization, management and automation, and Business Process Management (BPM). Moreover, CloudExp introduces the Rain workload generator which emulates real workloads in cloud environments. Also, MapReduce processing model is integrated in CloudExp in order to handle the processing of big data problems. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "50aee861791971360a3cb947eca34f31",
"text": "Computer games have become an ever-increasing part of many adolescents' day-to-day lives. Coupled with this phenomenon, reports of excessive gaming (computer game playing) denominated as \"computer/video game addiction\" have been discussed in the popular press as well as in recent scientific research. The aim of the present study was the investigation of the addictive potential of gaming as well as the relationship between excessive gaming and aggressive attitudes and behavior. A sample comprising of 7069 gamers answered two questionnaires online. Data revealed that 11.9% of participants (840 gamers) fulfilled diagnostic criteria of addiction concerning their gaming behavior, while there is only weak evidence for the assumption that aggressive behavior is interrelated with excessive gaming in general. Results of this study contribute to the assumption that also playing games without monetary reward meets criteria of addiction. Hence, an addictive potential of gaming should be taken into consideration regarding prevention and intervention.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
},
{
"docid": "7edb29f1b41347995febb525cc4cba2e",
"text": "Keyword queries enjoy widespread usage as they represent an intuitive way of specifying information needs. Recently, answering keyword queries on graph-structured data has emerged as an important research topic. The prevalent approaches build on dedicated indexing techniques as well as search algorithms aiming at finding substructures that connect the data elements matching the keywords. In this paper, we introduce a novel keyword search paradigm for graph-structured data, focusing in particular on the RDF data model. Instead of computing answers directly as in previous approaches, we first compute queries from the keywords, allowing the user to choose the appropriate query, and finally, process the query using the underlying database engine. Thereby, the full range of database optimization techniques can be leveraged for query processing. For the computation of queries, we propose a novel algorithm for the exploration of top-k matching subgraphs. While related techniques search the best answer trees, our algorithm is guaranteed to compute all k subgraphs with lowest costs, including cyclic graphs. By performing exploration only on a summary data structure derived from the data graph, we achieve promising performance improvements compared to other approaches.",
"title": ""
},
{
"docid": "dde9424652393fa66350ec6510c20e97",
"text": "Framed under a cognitive approach to task-based L2 learning, this study used a pedagogical approach to investigate the effects of three vocabulary lessons (one traditional and two task-based) on acquisition of basic meanings, forms and morphological aspects of Spanish words. Quantitative analysis performed on the data suggests that the type of pedagogical approach had no impact on immediate retrieval (after treatment) of targeted word forms, but it had an impact on long-term retrieval (one week) of targeted forms. In particular, task-based lessons seemed to be more effective than the Presentation, Practice and Production (PPP) lesson. The analysis also suggests that a task-based lesson with an explicit focus-on-forms component was more effective than a task-based lesson that did not incorporate this component in promoting acquisition of word morphological aspects. The results also indicate that the explicit focus on forms component may be more effective when placed at the end of the lesson, when meaning has been acquired. Results are explained in terms of qualitative differences in amounts of focus on form and meaning, type of form-focused instruction provided, and opportunities for on-line targeted output retrieval. The findings of this study provide evidence for the value of a proactive (Doughty and Williams, 1998a) form-focused approach to Task-Based L2 vocabulary learning, especially structure-based production tasks (Ellis, 2003). Overall, they suggest an important role of pedagogical tasks in teaching L2 vocabulary.",
"title": ""
}
] |
scidocsrr
|
d1e2e101efdaa328d9763f7f527ae7fb
|
Some economics of private digital currency
|
[
{
"docid": "4bfb389e1ae2433f797458ff3fe89807",
"text": "Many if not most markets with network externalities are two-sided. To succeed, platforms in industries such as software, portals and media, payment systems and the Internet, must “get both sides of the market on board ”. Accordingly, platforms devote much attention to their business model, that is to how they court each side while making money overall. The paper builds a model of platform competition with two-sided markets. It unveils the determinants of price allocation and enduser surplus for different governance structures (profit-maximizing platforms and not-for-profit joint undertakings), and compares the outcomes with those under an integrated monopolist and a Ramsey planner.",
"title": ""
}
] |
[
{
"docid": "f3860c0ed0803759e44133a0110a60bb",
"text": "Using comment information available from Digg we define a co-participation network between users. We focus on the analysis of this implicit network, and study the behavioral characteristics of users. Using an entropy measure, we infer that users at Digg are not highly focused and participate across a wide range of topics. We also use the comment data and social network derived features to predict the popularity of online content linked at Digg using a classification and regression framework. We show promising results for predicting the popularity scores even after limiting our feature extraction to the first few hours of comment activity that follows a Digg submission.",
"title": ""
},
{
"docid": "f0b32c584029cd407fd350ddd9d00e70",
"text": "Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. Scalable dynamic load balancing on large clusters is a challenging problem which can be addressed with distributed dynamic load balancing systems. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.",
"title": ""
},
{
"docid": "d70ea405a182c4de3f50858599f84ad8",
"text": "Oral lichen planus (OLP) has a prevalence of approximately 1%. The etiopathogenesis is poorly understood. The annual malignant transformation is less than 0.5%. There are no effective means to either predict or to prevent such event. Oral lesions may occur that to some extent look like lichen planus but lacking the characteristic features of OLP, or that are indistinguishable from OLP clinically but having a distinct cause, e.g. amalgam restoration associated. Such lesions are referred to as oral lichenoid lesions (OLLs). The management of OLP and the various OLLs may be different. Therefore, accurate diagnosis should be aimed at.",
"title": ""
},
{
"docid": "e6e0452c62ec807df99aadf660e3193d",
"text": "Bacteria have been widely used as starter cultures in the food industry, notably for the fermentation of milk into dairy products such as cheese and yogurt. Lactic acid bacteria used in food manufacturing, such as lactobacilli, lactococci, streptococci, Leuconostoc, pediococci, and bifidobacteria, are selectively formulated based on functional characteristics that provide idiosyncratic flavor and texture attributes, as well as their ability to withstand processing and manufacturing conditions. Unfortunately, given frequent viral exposure in industrial environments, starter culture selection and development rely on defense systems that provide resistance against bacteriophage predation, including restriction-modification, abortive infection, and recently discovered CRISPRs (clustered regularly interspaced short palindromic repeats). CRISPRs, together with CRISPR-associated genes (cas), form the CRISPR/Cas immune system, which provides adaptive immunity against phages and invasive genetic elements. The immunization process is based on the incorporation of short DNA sequences from virulent phages into the CRISPR locus. Subsequently, CRISPR transcripts are processed into small interfering RNAs that guide a multifunctional protein complex to recognize and cleave matching foreign DNA. Hypervariable CRISPR loci provide insights into the phage and host population dynamics, and new avenues for enhanced phage resistance and genetic typing and tagging of industrial strains.",
"title": ""
},
{
"docid": "6573162f8feacae5f121f69780534527",
"text": "Larger fields in the Middle-size league as well as the effort to build mixed teams from different universities require a simulation environment which is capable to physically correctly simulate the robots and the environment. A standardized simulation environment has not yet been proposed for this league. In this paper we present our simulation environment, which is based on the Gazebo system. We show how typical Middle-size robots with features like omni-drives and omni-directional cameras can be modeled with relative ease. In particular, the control software for the real robots can be used with few changes, thus facilitating the transfer of results obtained in simulation back to the robots. We address some technical issues such as adapting time-triggered events in the robot control software to the simulation, and we introduce the concept of multi-level abstractions. The latter allows switching between faithful but computionally expensive sensor models and abstract but cheap approximations. These abstractions are needed especially when simulating whole teams of robots.",
"title": ""
},
{
"docid": "5d8bc7d7c3ca5f8ebef7cbdace5a5db2",
"text": "The concept of knowledge management (KM) as a powerful competitive weapon has been strongly emphasized in the strategic management literature, yet the sustainability of the competitive advantage provided by KM capability is not well-explained. To fill this gap, this paper develops the concept of KM as an organizational capability and empirically examines the association between KM capabilities and competitive advantage. In order to provide a better presentation of significant relationships, through resource-based view of the firm explicitly recognizes important of KM resources and capabilities. Firm specific KM resources are classified as social KM resources, and technical KM resources. Surveys collected from 177 firms were analyzed and tested. The results confirmed the impact of social KM resource on competitive advantage. Technical KM resource is negatively related with competitive advantage, and KM capability is significantly related with competitive advantage. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "74273502995ceaac87737d274379d7dc",
"text": "Majority of the systems designed to handle big RDF data rely on a single high-end computer dedicated to a certain RDF dataset and do not easily scale out, at the same time several clustered solution were tested and both the features and the benchmark results were unsatisfying. In this paper we describe a system designed to tackle such issues, a system that connects RDF4J and Apache HBase in order to receive an extremely scalable RDF store.",
"title": ""
},
{
"docid": "0107bb438c2a117abfdb55b09721461f",
"text": "Recently, there is a growing interest of research on the relationship of gut-microbiota and neurological disorders. Increasing number of findings suggests the broader role of gut-microbiota in the modulation of various physiological and pathological conditions and it is now well recognized that a bidirectional communication between brain and gut-microbiota is essential to maintain homeostasis. The gut-brain axis includes central nervous system (CNS), the neuroendocrine and neuroimmune systems, autonomic nervous system, enteric nervous system, and intestinal microbiota. Probiotics (i.e., live microorganisms similar to beneficial microorganisms found in the human gut) are reported to modulate a number of disorders including metabolic disorders, behavioral conditions and cognitive functions. This review covers the significance of gut-brain axis in relation to the overall mental well-being. Apart from the recent studies highlighting the importance of gut-brain axis, here we also reviewed the interaction of few herbal medicines with gut-brain axis. Animal studies have indicated that some herbs or their isolated constituents alter the normal gut flora and have prominent effect on behavioral condition such as anxiety depression and cognition. Thus alteration of gut-brain axis by traditional medicines will be a potential strategy for the management of comorbid CNS disorders and gastrointestinal problems.",
"title": ""
},
{
"docid": "79af484aeb0777371891929c55603eb0",
"text": "No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory's predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.",
"title": ""
},
{
"docid": "eaefba9984e024ba62f99b875f3194ad",
"text": "Image restoration algorithms are typically evaluated by some distortion measure (e.g. PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify perceived perceptual quality. In this paper, we prove mathematically that distortion and perceptual quality are at odds with each other. Specifically, we study the optimal probability for correctly discriminating the outputs of an image restoration algorithm from real images. We show that as the mean distortion decreases, this probability must increase (indicating worse perceptual quality). As opposed to the common belief, this result holds true for any distortion measure, and is not only a problem of the PSNR or SSIM criteria. However, as we show experimentally, for some measures it is less severe (e.g. distance between VGG features). We also show that generative-adversarial-nets (GANs) provide a principled way to approach the perception-distortion bound. This constitutes theoretical support to their observed success in low-level vision tasks. Based on our analysis, we propose a new methodology for evaluating image restoration methods, and use it to perform an extensive comparison between recent super-resolution algorithms.",
"title": ""
},
{
"docid": "6987e20daf52bcf25afe6a7f0a95a730",
"text": "Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance thereby favoring better sparsities and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary, and uses it to remove aliasing and noise in one step, and subsequently restores and fills-in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.",
"title": ""
},
{
"docid": "1b97a93d4d975e2ea4082616ccd11948",
"text": "This paper presents an optimized wind energy harvesting (WEH) system that uses a specially designed ultra-low-power-management circuit for sustaining the operation of a wireless sensor node. The proposed power management circuit has two distinct features: 1) an active rectifier using MOSFETs for rectifying the low amplitude ac voltage generated by the wind turbine generator under low wind speed condition efficiently and 2) a dc-dc boost converter with resistor emulation algorithm to perform maximum power point tracking (MPPT) under varying wind-speed conditions. As compared to the conventional diode-bridge rectifier, it is shown that the efficiency of the active rectifier with a low input voltage of 1.2 V has been increased from 40% to 70% due to the significant reduction in the ON-state voltage drop (from 0.6 to 0.15 V) across each pair of MOSFETs used. The proposed robust low-power microcontroller-based resistance emulator is implemented with closed-loop resistance feedback control to ensure close impedance matching between the source and the load, resulting in an efficient power conversion. From the experimental test results obtained, an average electrical power of 7.86 mW is harvested by the optimized WEH system at an average wind speed of 3.62 m/s, which is almost four times higher than the conventional energy harvesting method without using the MPPT.",
"title": ""
},
{
"docid": "288f831e93e83b86d28624e31bb2f16c",
"text": "Deep learning has made significant improvements at many image processing tasks in recent years, such as image classification, object recognition and object detection. Convolutional neural networks (CNN), which is a popular deep learning architecture designed to process data in multiple array form, show great success to almost all detection & recognition problems and computer vision tasks. However, the number of parameters in a CNN is too high such that the computers require more energy and larger memory size. In order to solve this problem, we propose a novel energy efficient model Binary Weight and Hadamard-transformed Image Network (BWHIN), which is a combination of Binary Weight Network (BWN) and Hadamard-transformed Image Network (HIN). It is observed that energy efficiency is achieved with a slight sacrifice at classification accuracy. Among all energy efficient networks, our novel ensemble model outperforms other energy efficient models.",
"title": ""
},
{
"docid": "e0d553cc4ca27ce67116c62c49c53d23",
"text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.",
"title": ""
},
{
"docid": "cf281d60ea830892a441bc91fe05ab72",
"text": "The signal-to-noise ratio (SNR) is the gold standard metric for capturing wireless link quality, but offers limited predictability. Recent work shows that frequency diversity causes limited predictability in SNR, and proposes effective SNR. Owing to its significant improvement over SNR, effective SNR has become a widely adopted metric for measuring wireless channel quality and served as the basis for many recent rate adaptation schemes. In this paper, we first conduct trace driven evaluation, and find that the accuracy of effective SNR is still inadequate due to frequency diversity and bursty errors. While common wisdom says that interleaving should remove the bursty errors, bursty errors still persist under the WiFi interleaver. Therefore, we develop two complementary methods for computing frame delivery rate to capture the bursty errors under the WiFi interleaver. We then design a new interleaver to reduce the burstiness of errors, and improve the frame delivery rate. We further design a rate adaptation scheme based on our delivery rate estimation. It can support both WiFi and our interleaver. Using extensive evaluation, we show our delivery rate estimation is accurate and significantly out-performs effective SNR; our interleaver improves the delivery rate over the WiFi interleaver; and our rate adaptation improves both throughput and energy.",
"title": ""
},
{
"docid": "41d97d98a524e5f1e45ae724017819d9",
"text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.",
"title": ""
},
{
"docid": "fd27a21d2eaf5fc5b37d4cba6bd4dbef",
"text": "RICHARD M. FELDER and JONI SPURLIN North Carolina State University, Raleigh, North Carolina 27695±7905, USA. E-mail: rmfelder@mindspring.com The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument. This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?",
"title": ""
},
{
"docid": "e2c4c7e45080c9eb6f99be047ee65958",
"text": "This paper describes the current state of mu.semte.ch, a platform for building state-of-the-art web applications fuelled by Linked Data aware microservices. The platform assumes a mashup-like construction of single page web applications which consume various services. In order to reuse tooling built in the community, Linked Data is not pushed to the frontend.",
"title": ""
},
{
"docid": "ba16a6634b415dd2c478c83e1f65cb3c",
"text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"title": ""
},
{
"docid": "3377a3aa9dc965b49593f6464558d0c4",
"text": "Recognizing human actions in untrimmed videos is an important challenging task. An effective 3D motion representation and a powerful learning model are two key factors influencing recognition performance. In this paper we introduce a new skeletonbased representation for 3D action recognition in videos. The key idea of the proposed representation is to transform 3D joint coordinates of the human body carried in skeleton sequences into RGB images via a color encoding process. By normalizing the 3D joint coordinates and dividing each skeleton frame into five parts, where the joints are concatenated according to the order of their physical connections, the color-coded representation is able to represent spatio-temporal evolutions of complex 3D motions, independently of the length of each sequence. We then design and train different Deep Convolutional Neural Networks (D-CNNs) based on the Residual Network architecture (ResNet) on the obtained image-based representations to learn 3D motion features and classify them into classes. Our method is evaluated on two widely used action recognition benchmarks: MSR Action3D and NTU-RGB+D, a very large-scale dataset for 3D human action recognition. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art approaches whilst requiring less computation for training and prediction.",
"title": ""
}
] |
scidocsrr
|
bdfd33f0d74967194e35eb110a9e12ee
|
A tendon skeletal finger model for evaluation of pinching effort
|
[
{
"docid": "8405f30ca5f4bd671b056e9ca1f4d8df",
"text": "The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.",
"title": ""
}
] |
[
{
"docid": "dc7fb9e9ef95fa438b242e24517b6d36",
"text": "The representation of candidate solutions and the variation operators are fundamental design choices in an evolutionary algorithm (EA). This paper proposes a novel representation technique and suitable variation operators for the degree-constrained minimum spanning tree problem. For a weighted, undirected graphG(V, E), this problem seeks to identify the shortest spanning tree whose node degrees do not exceed an upper bound d ≥ 2. Within the EA, a candidate spanning tree is simply represented by its set of edges. Special initialization, crossover, and mutation operators are used to generate new, always feasible candidate solutions. In contrast to previous spanning tree representations, the proposed approach provides substantially higher locality and is nevertheless computationally efficient; an offspring is always created in O(|V |) time. In addition, it is shown how problemdependent heuristics can be effectively incorporated into the initialization, crossover, and mutation operators without increasing the time-complexity. Empirical results are presented for hard problem instances with up to 500 vertices. Usually, the new approach identifies solutions superior to those of several other optimization methods within few seconds. The basic ideas of this EA are also applicable to other network optimization tasks.",
"title": ""
},
{
"docid": "7c9642705d402fe5dcbfac12bd35b393",
"text": "The idea of reserve against brain damage stems from the repeated observation that there does not appear to be a direct relationship between the degree of brain pathology or brain damage and the clinical manifestation of that damage. This paper attempts to develop a coherent theoretical account of reserve. One convenient subdivision of reserve models revolves around whether they envision reserve as a passive process, such as in brain reserve or threshold, or see the brain as actively attempting to cope with or compensate for pathology, as in cognitive reserve. Cognitive reserve may be based on more efficient utilization of brain networks or of enhanced ability to recruit alternate brain networks as needed. A distinction is suggested between reserve, the ability to optimize or maximize normal performance, and compensation, an attempt to maximize performance in the face of brain damage by using brain structures or networks not engaged when the brain is not damaged. Epidemiologic and imaging data that help to develop and support the concept of reserve are presented.",
"title": ""
},
{
"docid": "fe407f4983ef6cc2e257d63a173c8487",
"text": "We present a semantically rich graph representation for indoor robotic navigation. Our graph representation encodes: semantic locations such as offices or corridors as nodes, and navigational behaviors such as enter office or cross a corridor as edges. In particular, our navigational behaviors operate directly from visual inputs to produce motor controls and are implemented with deep learning architectures. This enables the robot to avoid explicit computation of its precise location or the geometry of the environment, and enables navigation at a higher level of semantic abstraction. We evaluate the effectiveness of our representation by simulating navigation tasks in a large number of virtual environments. Our results show that using a simple sets of perceptual and navigational behaviors, the proposed approach can successfully guide the way of the robot as it completes navigational missions such as going to a specific office. Furthermore, our implementation shows to be effective to control the selection and switching of behaviors.",
"title": ""
},
{
"docid": "f8bebcf8d9b544c82af547865672b06a",
"text": "An instance with a bad mask might make a composite image that uses it look fake. This encourages us to learn segmentation by generating realistic composite images. To achieve this, we propose a novel framework that exploits a new proposed prior called the independence prior based on Generative Adversarial Networks (GANs). The generator produces an image with multiple category-specific instance providers, a layout module and a composition module. Firstly, each provider independently outputs a category-specific instance image with a soft mask. Then the provided instances’ poses are corrected by the layout module. Lastly, the composition module combines these instances into a final image. Training with adversarial loss and penalty for mask area, each provider learns a mask that is as small as possible but enough to cover a complete category-specific instance. Weakly supervised semantic segmentation methods widely use grouping cues modeling the association between image parts, which are either artificially designed or learned with costly segmentation labels or only modeled on local pairs. Unlike them, our method automatically models the dependence between any parts and learns instance segmentation. We apply our framework in two cases: (1) Foreground segmentation on category-specific images with box-level annotation. (2) Unsupervised learning of instance appearances and masks with only one image of homogeneous object cluster (HOC). We get appealing results in both tasks, which shows the independence prior is useful for instance segmentation and it is possible to unsupervisedly learn instance masks with only one image.",
"title": ""
},
{
"docid": "59bc11cd78549304225ab630ef0f5701",
"text": "This study presents and examines SamEx, a mobile learning system used by 305 students in formal and informal learning in a primary school in Singapore. Students use SamEx in situ to capture media such as pictures, video clips and audio recordings, comment on them, and share them with their peers. In this paper we report on the experiences of students in using the application throughout a one-year period with a focus on self-directedness, quality of contributions, and answers to contextual question prompts. We examine how the usage of tools such as SamEx predicts students' science examination results, discuss the role of badges as an extrinsic motivational tool, and explore how individual and collaborative learning emerge. Our research shows that the quantity and quality of contributions provided by the students in SamEx predict the end-year assessment score. With respect to specific system features, contextual answers given by the students and the overall likes received by students are also correlated with the end-year assessment score. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1c4930b976f35488e9df6ead74358878",
"text": "The covalently modified ureido-conjugated chitosan/TPP multifunctional nanoparticles have been developed as targeted nanomedicine delivery system for eradication of Helicobacter pylori. H. pylori can specifically express the urea transport protein on its membrane to transport urea into cytoplasm for urease to produce ammonia, which protects the bacterium in the acid milieu of stomach. The clinical applicability of topical antimicrobial agent is needed to eradicate H. pylori in the infected fundal area. In this study, we designed and synthesized two ureido-conjugated chitosan derivatives UCCs-1 and UCCs-2 for preparation of multifunctional nanoparticles. The process was optimized in order to prepare UCCs/TPP nanoparticles for encapsulation of amoxicillin. The results showed that the amoxicillin-UCCs/TPP nanoparticles exhibited favorable pH-sensitive characteristics, which could procrastinate the release of amoxicillin at gastric acids and enable the drug to deliver and target to H. pylori at its survival region effectively. Compared with unmodified amoxicillin-chitosan/TPP nanoparticles, a more specific and effective H. pylori growth inhibition was observed for amoxicillin-UCCs/TPP nanoparticles. Drug uptake analysis tested by flow cytometry and confocal laser scanning microscopy verified that the uptake of FITC-UCCs-2/TPP nanoparticles was associated with urea transport protein on the membrane of H. pylori and reduced with the addition of urea as competitive transport substrate. These findings suggest that the multifunctional amoxicillin-loaded nanoparticles have great potential for effective therapy of H. pylori infection. They may also serve as pharmacologically effective nanocarriers for oral targeted delivery of other therapeutic drugs to treat H. pylori.",
"title": ""
},
{
"docid": "3e3953e09f35c418316370f2318550aa",
"text": "Poker is ideal for testing automated reason ing under uncertainty. It introduces un certainty both by physical randomization and by incomplete information about op ponents' hands. Another source of uncer tainty is the limited information available to construct psychological models of opponents, their tendencies to bluff, play conservatively, reveal weakness, etc. and the relation be tween their hand strengths and betting be haviour. All of these uncertainties must be assessed accurately and combined effectively for any reasonable level of skill in the game to be achieved, since good decision making is highly sensitive to those tasks. We de scribe our Bayesian Poker Program (BPP) , which uses a Bayesian network to model the program's poker hand, the opponent's hand and the opponent's playing behaviour con ditioned upon the hand, and betting curves which govern play given a probability of win ning. The history of play with opponents is used to improve BPP's understanding of their behaviour. We compare BPP experimentally with: a simple rule-based system; a program which depends exclusively on hand probabil ities (i.e., without opponent modeling); and with human players. BPP has shown itself to be an effective player against all these opponents, barring the better humans. We also sketch out some likely ways of improv ing play.",
"title": ""
},
{
"docid": "52deb6870cc5e998c9f61132fd763bdd",
"text": "BACKGROUND\nThe burden of malaria is a key challenge to both human and economic development in malaria endemic countries. The impact of malaria can be categorized from three dimensions, namely: health, social and economic. The objective of this study was to estimate the impact of malaria morbidity on gross domestic product (GDP) of Uganda.\n\n\nMETHODS\nThe impact of malaria morbidity on GDP of Uganda was estimated using double-log econometric model. The 1997-2003 time series macro-data used in the analysis were for 28 quarters, i.e. 7 years times 4 quarters per year. It was obtained from national and international secondary sources.\n\n\nRESULTS\nThe slope coefficient for Malaria Index (M) was -0.00767; which indicates that when malaria morbidity increases by one unit, while holding all other explanatory variables constant, per capita GDP decreases by US$0.00767 per year. In 2003 Uganda lost US$ 49,825,003 of GDP due to malaria morbidity. Dividing the total loss of US$49.8 million by a population of 25,827,000 yields a loss in GDP of US$1.93 per person in Uganda in 2003.\n\n\nCONCLUSION\nMalaria morbidity results in a substantive loss in GDP of Uganda. The high burden of malaria leads to decreased long-term economic growth, and works against poverty eradication efforts and socioeconomic development of the country.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "e4000835f1870399c4270492fb81694b",
"text": "In this paper, a new design of mm-Wave phased array 5G antenna for multiple-input multiple-output (MIMO) applications has been introduced. Two identical linear phased arrays with eight leaf-shaped bow-tie antenna elements have been used at different sides of the mobile-phone PCB. An Arlon AR 350 dielectric with properties of h=0.5 mm, ε=3.5, and δ=0.0026 has been used as a substrate of the proposed design. The antenna is working in the frequency range of 25 to 40 GHz (more than 45% FBW) and can be easily fit into current handheld devices. The proposed MIMO antenna has good radiation performances at 28 and 38 GHz which both are powerful candidates to be the carrier frequency of the future 5G cellular networks.",
"title": ""
},
{
"docid": "51ea8936c266077b1522d1d953d356ec",
"text": "Speech data typically contains task irrelevant information lying within features. Specifically, phonetic information, speaker characteristic information, emotional information and noise are always mixed together and tend to impair one another for certain task. We propose a new type of auto-encoder for feature learning called contrastive auto-encoder. Unlike other variants of auto-encoders, contrastive auto-encoder is able to leverage class labels in constructing its representation layer. We achieve this by modeling two autoencoders together and making their differences contribute to the total loss function. The transformation built with contrastive auto-encoder can be seen as a task-specific and invariant feature learner. Our experiments on TIMIT clearly show the superiority of the feature extracted from contrastive auto-encoder over original acoustic feature, feature extracted from deep auto-encoder, and feature extracted from a model that contrastive auto-encoder originates from.",
"title": ""
},
{
"docid": "f0a82f428ac508351ffa7b97bb909b60",
"text": "Automated Teller Machines (ATMs) can be considered among one of the most important service facilities in the banking industry. The investment in ATMs and the impact on the banking industry is growing steadily in every part of the world. The banks take into consideration many factors like safety, convenience, visibility, and cost in order to determine the optimum locations of ATMs. Today, ATMs are not only available in bank branches but also at retail locations. Another important factor is the cash management in ATMs. A cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data which is highly related to the ATMs location. So, the location and the cash management problem should be considered together. This paper provides a general review on studies, efforts and development in ATMs location and cash management problem. Keywords—ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs.",
"title": ""
},
{
"docid": "55501bfdebc94f1f78eafbe33beff5a3",
"text": "OBJECTIVES/HYPOTHESIS\nCurrent three-dimensional (3D) printed simulations are complicated by insufficient void spaces and inconsistent density. We describe a novel simulation with focus on internal anatomic fidelity and evaluate against template/identical cadaveric education.\n\n\nSTUDY DESIGN\nResearch ethics board-approved prospective cohort study.\n\n\nMETHODS\nGeneration of a 3D printed temporal bone was performed using a proprietary algorithm that deconstructs the digital model into slices prior to printing. This supplemental process facilitates removal of residual material from air-containing spaces and permits requisite infiltrative access to the all regions of the model. Ten otolaryngology trainees dissected a cadaveric temporal bone (CTB) followed by a matched/isomorphic 3D printed bone model (PBM), based on derivative micro-computed tomography data. Participants rated 1) physical characteristics, 2) specific anatomic constructs, 3) usefulness in skill development, and 4) perceived educational value. The survey instrument employed a seven-point Likert scale.\n\n\nRESULTS\nTrainees felt physical characteristics of the PBM were quite similar to CTB, with highly ranked cortical (5.5 ± 1.5) and trabecular (5.2 ± 1.3) bone drill quality. The overall model was considered comparable to CTB (5.9 ± 0.74), with respectable air cell reproduction (6.1 ± 1.1). Internal constructs were rated as satisfactory (range, 4.9-6.2). The simulation was considered a beneficial training tool for all types of mastoidectomy (range, 5.9-6.6), posterior tympanotomy (6.5 ± 0.71), and skull base approaches (range, 6-6.5). Participants believed the model to be an effective training instrument (6.7 ± 0.68), which should be incorporated into the temporal bone lab (7.0 ± 0.0). The PBM was thought to improve confidence (6.7 ± 0.68) and operative performance (6.7 ± 0.48).\n\n\nCONCLUSIONS\nStudy participants found the PBM to be an effective platform that compared favorably to CTB. The model was considered a valuable adjunctive training tool with both realistic mechanical and visual character.\n\n\nLEVEL OF EVIDENCE\nNA",
"title": ""
},
{
"docid": "9bba22f8f70690bee5536820567546e6",
"text": "Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic, and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper, we present a new algorithm-a convexified version of maximum likelihood-for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semirandom graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs, planted clique/coloring, and so on. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors.",
"title": ""
},
{
"docid": "fdb9da0c4b6225c69de16411c79ac9dc",
"text": "Phylogenetic analyses reveal the evolutionary derivation of species. A phylogenetic tree can be inferred from multiple sequence alignments of proteins or genes. The alignment of whole genome sequences of higher eukaryotes is a computational intensive and ambitious task as is the computation of phylogenetic trees based on these alignments. To overcome these limitations, we here used an alignment-free method to compare genomes of the Brassicales clade. For each nucleotide sequence a Chaos Game Representation (CGR) can be computed, which represents each nucleotide of the sequence as a point in a square defined by the four nucleotides as vertices. Each CGR is therefore a unique fingerprint of the underlying sequence. If the CGRs are divided by grid lines each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence (Frequency Chaos Game Representation, FCGR). Here, we used distance measures between FCGRs to infer phylogenetic trees of Brassicales species. Three types of data were analyzed because of their different characteristics: (A) Whole genome assemblies as far as available for species belonging to the Malvidae taxon. (B) EST data of species of the Brassicales clade. (C) Mitochondrial genomes of the Rosids branch, a supergroup of the Malvidae. The trees reconstructed based on the Euclidean distance method are in general agreement with single gene trees. The Fitch-Margoliash and Neighbor joining algorithms resulted in similar to identical trees. Here, for the first time we have applied the bootstrap re-sampling concept to trees based on FCGRs to determine the support of the branchings. FCGRs have the advantage that they are fast to calculate, and can be used as additional information to alignment based data and morphological characteristics to improve the phylogenetic classification of species in ambiguous cases.",
"title": ""
},
{
"docid": "30e6f3f88575a82ac47a4f383924dbb2",
"text": "The aircraft industry is developing the more electric aircraft (MEA) with an ultimate goal of distributing only electrical power across the airframe. The replacement of existing systems with electric equivalents has, and will continue to, significantly increase the electrical power requirement. This has created a need for the enhancement of generation capacity and changes to distribution systems. The higher powers will push distribution voltages higher in order to limit conduction losses and reduce cable size, and hence weight. A power electronic interface may be required to regulate generator output into the distributed power form.",
"title": ""
},
{
"docid": "a9814f2847c6e1bf66893e4fa1a9c50e",
"text": "This paper is aimed at obtaining some new lower and upper bounds for the functions cosx , sinx/x , x/coshx , thus establishing inequalities involving circulr, hyperbolic and exponential functions.",
"title": ""
},
{
"docid": "9bb88b82789d43e48b1e8a10701d39bd",
"text": "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many artificial intelligence–related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. In this article, we review several popular deep learning models, including deep belief networks and deep Boltzmann machines. We show that (a) these deep generative models, which contain many layers of latent variables and millions of parameters, can be learned efficiently, and (b) the learned high-level feature representations can be successfully applied in many application domains, including visual object recognition, information retrieval, classification, and regression tasks.",
"title": ""
},
{
"docid": "3e7adbc4ea0bb5183792efd19d3c23a5",
"text": "a Faculty of Science and Information Technology, Al-Zaytoona University of Jordan, Amman, Jordan b School of Informatics, University of Bradford, Bradford BD7 1DP, United Kingdom c Information & Computer Science Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia d Centre for excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, G1 1XW, United Kingdom",
"title": ""
}
] |
scidocsrr
|
ca74961f04b58e1d20502f502794e8a0
|
Getting the Message?: A Study of Explanation Interfaces for Microblog Data Analysis
|
[
{
"docid": "eda242b58e5ed2a2736cb7cccc73220e",
"text": "This paper presents an interactive hybrid recommendation system that generates item predictions from multiple social and semantic web resources, such as Wikipedia, Facebook, and Twitter. The system employs hybrid techniques from traditional recommender system literature, in addition to a novel interactive interface which serves to explain the recommendation process and elicit preferences from the end user. We present an evaluation that compares different interactive and non-interactive hybrid strategies for computing recommendations across diverse social and semantic web APIs. Results of the study indicate that explanation and interaction with a visual representation of the hybrid system increase user satisfaction and relevance of predicted content.",
"title": ""
}
] |
[
{
"docid": "97d1f0c14edeedd8348058b50fae653b",
"text": "A high-efficiency self-shielded microstrip-fed Yagi-Uda antenna has been developed for 60 GHz communications. The antenna is built on a Teflon substrate (εr = 2.2) with a thickness of 10 mils (0.254 mm). A 7-element design results in a measured S11 of <; -10 dB at 56.0 - 66.4 GHz with a gain >; 9.5 dBi at 58 - 63 GHz. The antenna shows excellent performance in free space and in the presence of metal-planes used for shielding purposes. A parametric study is done with metal plane heights from 2 mm to 11 mm, and the Yagi-Uda antenna results in a gain >; 12 dBi at 58 - 63 GHz for h = 5 - 8 mm. A 60 GHz four-element switched-beam Yagi-Uda array is also presented with top and bottom shielding planes, and allows for 180° angular coverage with <; 3 dB amplitude variations. This antenna is ideal for inclusion in complex platforms, such as laptops, for point-to-point communication systems, either as a single element or a switched-beam system.",
"title": ""
},
{
"docid": "2e6c0d221b018569ad7dc10204cbf64e",
"text": "Vehicle re-identification is an important problem and has many applications in video surveillance and intelligent transportation. It gains increasing attention because of the recent advances of person re-identification techniques. However, unlike person re-identification, the visual differences between pairs of vehicle images are usually subtle and even challenging for humans to distinguish. Incorporating additional spatio-temporal information is vital for solving the challenging re-identification task. Existing vehicle re-identification methods ignored or used oversimplified models for the spatio-temporal relations between vehicle images. In this paper, we propose a two-stage framework that incorporates complex spatio-temporal information for effectively regularizing the re-identification results. Given a pair of vehicle images with their spatiotemporal information, a candidate visual-spatio-temporal path is first generated by a chain MRF model with a deeply learned potential function, where each visual-spatiotemporal state corresponds to an actual vehicle image with its spatio-temporal information. A Siamese-CNN+Path- LSTM model takes the candidate path as well as the pairwise queries to generate their similarity score. Extensive experiments and analysis show the effectiveness of our proposed method and individual components.",
"title": ""
},
{
"docid": "d1b5b74db9e1a9fef2f91d3917940d94",
"text": "Relational databases are providing storage for several decades now. However for today's interactive web and mobile applications the importance of flexibility and scalability in data model can not be over-stated. The term NoSQL broadly covers all non-relational databases that provide schema-less and scalable model. NoSQL databases which are also termed as Internetage databases are currently being used by Google, Amazon, Facebook and many other major organizations operating in the era of Web 2.0. Different classes of NoSQL databases namely key-value pair, document, column-oriented and graph databases enable programmers to model the data closer to the format as used in their application. In this paper, data modeling and query syntax of relational and some classes of NoSQL databases have been explained with the help of an case study of a news website like Slashdot.",
"title": ""
},
{
"docid": "17c6b63d850292f5f1c78e156103c3b4",
"text": "Continual learning is the constant development of complex behaviors with no nal end in mind. It is the process of learning ever more complicated skills by building on those skills already developed. In order for learning at one stage of development to serve as the foundation for later learning, a continual-learning agent should learn hierarchically. CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development is proposed, described, tested, and evaluated in this dissertation. CHILD accumulates useful behaviors in reinforcement environments by using the Temporal Transition Hierarchies learning algorithm, also derived in the dissertation. This constructive algorithm generates a hierarchical, higher-order neural network that can be used for predicting context-dependent temporal sequences and can learn sequential-task benchmarks more than two orders of magnitude faster than competing neural-network systems. Consequently, CHILD can quickly solve complicated non-Markovian reinforcement-learning tasks and can then transfer its skills to similar but even more complicated tasks, learning these faster still. This continual-learning approach is made possible by the unique properties of Temporal Transition Hierarchies, which allow existing skills to be amended and augmented in precisely the same way that they were constructed in the rst place. Table of",
"title": ""
},
{
"docid": "cb929b640f8ee7b550512dd4d0dc8e17",
"text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.",
"title": ""
},
{
"docid": "4f3e37db8d656fe1e746d6d3a37878b5",
"text": "Shorter product life cycles and aggressive marketing, among other factors, have increased the complexity of sales forecasting. Forecasts are often produced using a Forecasting Support System that integrates univariate statistical forecasting with managerial judgment. Forecasting sales under promotional activity is one of the main reasons to use expert judgment. Alternatively, one can replace expert adjustments by regression models whose exogenous inputs are promotion features (price, display, etc.). However, these regression models may have large dimensionality as well as multicollinearity issues. We propose a novel promotional model that overcomes these limitations. It combines Principal Component Analysis to reduce the dimensionality of the problem and automatically identifies the demand dynamics. For items with limited history, the proposed model is capable of providing promotional forecasts by selectively pooling information across established products. The performance of the model is compared against forecasts provided by experts and statistical benchmarks, on weekly data; outperforming both substantially.",
"title": ""
},
{
"docid": "6f387b2c56b042815770605dc8fc9e8c",
"text": "Through investigating factors that influence consumers to make a transition from online to mobile banking, this empirical study shows that relative attitude and relative subjective norm positively motivated respondents to switch from Internet to mobile banking while relative perceived behavior control deterred respondents from transitioning. Empirical results also demonstrated that Internet banking is superior to mobile banking in terms of consumer relative compatibility, self-efficacy, resource facilitating conditions, and technology facilitating conditions. Meanwhile, mobile banking emerged as superior to Internet banking for other constructs. By adding a comparative concept into an extended decomposed theory of planned behavior (DTPB) model, this study may expand the applicable domain of current social psychology theories from the adoption of single products or services to the choice between competing products or services that achieve similar purposes and functions.",
"title": ""
},
{
"docid": "962858b6cbb3ae5c95d0018075fd0060",
"text": "By 2010, the worldwide annual production of plastics will surpass 300 million tons. Plastics are indispensable materials in modern society, and many products manufactured from plastics are a boon to public health (e.g., disposable syringes, intravenous bags). However, plastics also pose health risks. Of principal concern are endocrine-disrupting properties, as triggered for example by bisphenol A and di-(2-ethylhexyl) phthalate (DEHP). Opinions on the safety of plastics vary widely, and despite more than five decades of research, scientific consensus on product safety is still elusive. This literature review summarizes information from more than 120 peer-reviewed publications on health effects of plastics and plasticizers in lab animals and humans. It examines problematic exposures of susceptible populations and also briefly summarizes adverse environmental impacts from plastic pollution. Ongoing efforts to steer human society toward resource conservation and sustainable consumption are discussed, including the concept of the 5 Rs--i.e., reduce, reuse, recycle, rethink, restrain--for minimizing pre- and postnatal exposures to potentially harmful components of plastics.",
"title": ""
},
{
"docid": "655eb3f1d65f73120301f5cfcbaaffbf",
"text": "We present an unsupervised method for labelling the arguments of verbs with their semantic roles. Our bootstrapping algorithm makes initial unambiguous role assignments, and then iteratively updates the probability model on which future assignments are based. A novel aspect of our approach is the use of verb, slot, and noun class information as the basis for backing off in our probability model. We achieve 50–65% reduction in the error rate over an informed baseline, indicating the potential of our approach for a task that has heretofore relied on large amounts of manually generated training data.",
"title": ""
},
{
"docid": "cc92787280db22c46a159d95f6990473",
"text": "A novel formulation for the voltage waveforms in high efficiency linear power amplifiers is described. This formulation demonstrates that a constant optimum efficiency and output power can be obtained over a continuum of solutions by utilizing appropriate harmonic reactive impedance terminations. A specific example is confirmed experimentally. This new formulation has some important implications for the possibility of realizing broadband >10% high efficiency linear RF power amplifiers.",
"title": ""
},
{
"docid": "a7f68f0bd39decc3138c60d957a68a77",
"text": "Classification is an important data mining problem. Given a training database of records, each tagged with a class label, the goal of classification is to build a concise model that can be used to predict the class label of future, unlabeled records. A very popular class of classifiers are decision trees. All current algorithms to construct decision trees, including all main-memory algorithms, make one scan over the training database per level of the tree.\nWe introduce a new algorithm (BOAT) for decision tree construction that improves upon earlier algorithms in both performance and functionality. BOAT constructs several levels of the tree in only two scans over the training database, resulting in an average performance gain of 300% over previous work. The key to this performance improvement is a novel optimistic approach to tree construction in which we construct an initial tree using a small subset of the data and refine it to arrive at the final tree. We guarantee that any difference with respect to the “real” tree (i.e., the tree that would be constructed by examining all the data in a traditional way) is detected and corrected. The correction step occasionally requires us to make additional scans over subsets of the data; typically, this situation rarely arises, and can be addressed with little added cost.\nBeyond offering faster tree construction, BOAT is the first scalable algorithm with the ability to incrementally update the tree with respect to both insertions and deletions over the dataset. This property is valuable in dynamic environments such as data warehouses, in which the training dataset changes over time. The BOAT update operation is much cheaper than completely rebuilding the tree, and the resulting tree is guaranteed to be identical to the tree that would be produced by a complete re-build.",
"title": ""
},
{
"docid": "ff584005caf32a0d4e5b8101c0df43e3",
"text": "Computing resources including mobile devices at the edge of a network are increasingly connected and capable of collaboratively processing what's believed to be too complex to them. Collaboration possibilities with today's feature-rich mobile devices go far beyond simple media content sharing, traditional video conferencing and cloud-based software as a services. The realization of these possibilities for mobile edge computing (MEC) requires non-trivial amounts of efforts in enabling multi-device resource sharing. The current practice of mobile collaborative application development remains largely at the application level. In this paper, we present CollaboRoid, a platform-level solution that provides a set of system services for mobile collaboration. CollaboRoid's platform-level design significantly eases the development of mobile collaborative applications promoting MEC. In particular, it abstracts the sharing of not only hardware resources, but also software resources and multimedia contents between multiple heterogeneous mobile devices. We implement CollaboRoid in the application framework layer of the Android stack and evaluate it with several collaboration scenarios on Nexus 5 and 7 devices. Our experimental results show the feasibility of the platform-level collaboration using CollaboRoid in terms of the latency and energy consumption.",
"title": ""
},
{
"docid": "ba2d02d8c3e389b9b7659287eb406b16",
"text": "We propose and consolidate a definition of the discrete fractional Fourier transform that generalizes the discrete Fourier transform (DFT) in the same sense that the continuous fractional Fourier transform generalizes the continuous ordinary Fourier transform. This definition is based on a particular set of eigenvectors of the DFT matrix, which constitutes the discrete counterpart of the set of Hermite–Gaussian functions. The definition is exactlyunitary, index additive, and reduces to the DFT for unit order. The fact that this definition satisfies all the desirable properties expected of the discrete fractional Fourier transform supports our confidence that it will be accepted as the definitive definition of this transform.",
"title": ""
},
{
"docid": "748d71e6832288cd0120400d6069bf50",
"text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull",
"title": ""
},
{
"docid": "8b76618b089cc8b34cd2a01c775a4d3d",
"text": "a r t i c l e i n f o We present an overview of various edge and line oriented approaches to contour detection that have been proposed in the last two decades. By edge and line oriented we mean methods that do not rely on segmentation. Distinction is made between edges and contours. Contour detectors are divided in local and global operators. The former are mainly based on differential analysis, statistical approaches, phase congruency, rank order filters, and combinations thereof. The latter include computation of contour saliency, perceptual grouping, relaxation labeling and active contours. Important aspects are covered, such as preprocessing aimed to suppress texture and noise, multiresolution techniques, connections between computational models and properties of the human visual system, and use of shape priors. An overview of procedures and metrics for quantitative performance evaluation is also presented. Our main conclusion is that contour detection has reached high degree of sophistication, taking into account multimodal contour definition (by luminance, color or texture changes), mechanisms for reducing the contour masking influence of noise and texture, perceptual grouping, multiscale aspects and high-level vision information.",
"title": ""
},
{
"docid": "e3546095a5d0bb39755355c7a3acc875",
"text": "We propose to achieve explainable neural machine translation (NMT) by changing the output representation to explain itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations which allow setting markers in the target sentence and move a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information which is often crucial to practical machine translation as it improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on JapaneseEnglish and Portuguese-English, and is within 0.5 BLEU difference on Spanish-English.",
"title": ""
},
{
"docid": "1022d96690f759a350295ce4eb1c217f",
"text": "This paper provides an overview of current types of CNTFETs and of some compact models. Using the available models, the influence of the parameters on the device characteristics was simulated and analyzed. The conclusion is that the tube diameter influences not only the current level, but also the threshold voltage of the CNTFET, while the contact resistance influences only the current level. From a designer's point of view, taking care of the parameter variations and in particular of the nanotube diameters is crucial to achieve reliable circuits",
"title": ""
},
{
"docid": "6d141d99945bfa55fe8cc187f8c1b864",
"text": "Many software development and maintenance tools involve matching between natural language words in different software artifacts (e.g., traceability) or between queries submitted by a user and software artifacts (e.g., code search). Because different people likely created the queries and various artifacts, the effectiveness of these tools is often improved by expanding queries and adding related words to textual artifact representations. Synonyms are particularly useful to overcome the mismatch in vocabularies, as well as other word relations that indicate semantic similarity. However, experience shows that many words are semantically similar in computer science situations, but not in typical natural language documents. In this paper, we present an automatic technique to mine semantically similar words, particularly in the software context. We leverage the role of leading comments for methods and programmer conventions in writing them. Our evaluation of our mined related comment-code word mappings that do not already occur in WordNet are indeed viewed as computer science, semantically-similar word pairs in high proportions.",
"title": ""
}
] |
scidocsrr
|
c3b03692920265b7c587743409ba6c79
|
The virtual geographies of social networks: a comparative analysis of Facebook, LinkedIn and ASmallWorld
|
[
{
"docid": "12e088ccb86094d58c682e4071cce0a6",
"text": "Are there systematic differences between people who use social network sites and those who stay away, despite a familiarity with them? Based on data from a survey administered to a diverse group of young adults, this article looks at the predictors of SNS usage, with particular focus on Facebook, MySpace, Xanga, and Friendster. Findings suggest that use of such sites is not randomly distributed across a group of highly wired users. A person's gender, race and ethnicity, and parental educational background are all associated with use, but in most cases only when the aggregate concept of social network sites is disaggregated by service. Additionally, people with more experience and autonomy of use are more likely to be users of such sites. Unequal participation based on user background suggests that differential adoption of such services may be contributing to digital inequality.",
"title": ""
}
] |
[
{
"docid": "5475df204bca627e73b077594af29d47",
"text": "Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.",
"title": ""
},
{
"docid": "7e8ecb859bd5854140640bc809114d9f",
"text": "Entanglement, according to Erwin Schrödinger the essence of quantum mechanics, is at the heart of the Einstein-Podolsky-Rosen paradox and of the so called quantum-nonlocality – the fact that a local realistic explanation of quantum mechanics is not possible as quantitatively expressed by violation of Bell’s inequalities. Even as entanglement gains increasing importance in most quantum information processing protocols, its conceptual foundation is still widely debated. Among the open questions are: What is the conceptual meaning of quantum entanglement? What are the most general constraints imposed by local realism? Which general quantum states violate these constraints? Developing Schrödinger’s ideas in an information-theoretic context we suggest that a natural understanding of quantum entanglement results when one accepts (1) that the amount of information per elementary system is finite and (2) that the information in a composite system resides more in the correlations than in properties of individuals. The quantitative formulation of these ideas leads to a rather natural criterion of quantum entanglement. Independently, extending Bell’s original ideas, we obtain a single general Bell inequality that summarizes all possible constraints imposed by local realism on the correlations for a multi-particle system. Violation of the general Bell inequality results in an independent general criterion for quantum entanglement. Most importantly, the two criteria agree in essence, though the two approaches are conceptually very different. This concurrence strongly supports the information-theoretic interpretation of quantum entanglement and of quantum physics in general.",
"title": ""
},
{
"docid": "80105a011097a3bd37bf58d030131e13",
"text": "Deep CNNs have achieved great success in text detection. Most of existing methods attempt to improve accuracy with sophisticated network design, while paying less attention on speed. In this paper, we propose a general framework for text detection called Guided CNN to achieve the two goals simultaneously. The proposed model consists of one guidance subnetwork, where a guidance mask is learned from the input image itself, and one primary text detector, where every convolution and non-linear operation are conducted only in the guidance mask. The guidance subnetwork filters out non-text regions coarsely, greatly reducing the computation complexity. At the same time, the primary text detector focuses on distinguishing between text and hard non-text regions and regressing text bounding boxes, achieving a better detection accuracy. A novel training strategy, called background-aware block-wise random synthesis, is proposed to further boost up the performance. We demonstrate that the proposed Guided CNN is not only effective but also efficient with two state-of-the-art methods, CTPN [52] and EAST [64], as backbones. On the challenging benchmark ICDAR 2013, it speeds up CTPN by 2.9 times on average, while improving the F-measure by 1.5%. On ICDAR 2015, it speeds up EAST by 2.0 times while improving the F-measure by 1.0%. c © 2018. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. * Zhanghui Kuang is the corresponding author 2 YUE ET AL: BOOSTING UP SCENE TEXT DETECTORS WITH GUIDED CNN Figure 1: Illustration of guiding the primary text detector. Convolutions and non-linear operations are conducted only in the guidance mask indicated by the red and blue rectangles. The guidance mask (the blue) is expanded by backgroundaware block-wise random synthesis (the red) during training. When testing, the guidance mask is not expanded. Figure 2: Text appears very sparsely in scene images. The left shows one example image. The right shows the text area ratio composition of ICDAR 2013 test set. Images with (0%,10%], (10%,20%], (20%,30%], and (30%,40%] text region account for 57%, 21%, 11%, and 6% respectively. Only 5 % images have more than 40% text region. 57% 21% 11% 6% 5% (0.0,0.1] (0.1,0.2] (0.2,0.3] (0.3,0.4] (0.4,1.0]",
"title": ""
},
{
"docid": "ec989c3afdfebd6fe50dcb2205ac3ea3",
"text": "Recently, result diversification has attracted a lot of attention as a means to improve the quality of results retrieved by user queries. In this article, we introduce a novel definition of diversity called DisC diversity. Given a tuning parameter r, which we call radius, we consider two items to be similar if their distance is smaller than or equal to r. A DisC diverse subset of a result contains items such that each item in the result is represented by a similar item in the diverse subset and the items in the diverse subset are dissimilar to each other. We show that locating a minimum DisC diverse subset is an NP-hard problem and provide algorithms for its approximation. We extend our definition to the multiple radii case, where each item is associated with a different radius based on its importance, relevance, or other factors. We also propose adapting DisC diverse subsets to a different degree of diversification by adjusting r, that is, increasing the radius (or zooming-out) and decreasing the radius (or zooming-in). We present efficient implementations of our algorithms based on the M-tree, a spatial index structure, and experimentally evaluate their performance.",
"title": ""
},
{
"docid": "f22a95a07214cf77b72e6d6a64158474",
"text": "Children play games, chat with friends, tell stories, study history or math, and today this can all be done supported by new technologies. From the Internet to multimedia authoring tools, technology is changing the way children live and learn. As these new technologies become ever more critical to our children’s lives, we need to be sure these technologies support children in ways that make sense for them as young learners, explorers, and avid technology users. This may seem of obvious importance, because for almost 20 years the Human-Computer Interaction (HCI) community has pursued new ways to understand users of technology. However, with children as users, it has been difficult to bring them into the design process. Children go to school for most of their days; there are existing power structures, biases, and assumptions between adults and children to get beyond; and children, especially young ones have difficulty in verbalizing their thoughts. For all of these reasons, a child’s role in the design of new technology has historically been minimized. Based upon a survey of the literature and my own research experiences with children, this paper defines a framework for understanding the various roles children can have in the design process, and how these roles can impact technologies that are created.",
"title": ""
},
{
"docid": "2b2cd290f12d98667d6a4df12697a05e",
"text": "The chapter proposes three ways of integration of the two different worlds of relational and NoSQL databases: native, hybrid, and reducing to one option, either relational or NoSQL. The native solution includes using vendors’ standard APIs and integration on the business layer. In a relational environment, APIs are based on SQL standards, while the NoSQL world has its own, unstandardized solutions. The native solution means using the APIs of the individual systems that need to be connected, leaving to the businesslayer coding the task of linking and separating data in extraction and storage operations. A hybrid solution introduces an additional layer that provides SQL communication between the business layer and the data layer. The third integration solution includes vendors’ effort to foresee functionalities of “opposite” side, thus convincing developers’ community that their solution is sufficient.",
"title": ""
},
{
"docid": "8c4d4567cf772a76e99aa56032f7e99e",
"text": "This paper discusses current perspectives on play and leisure and proposes that if play and leisure are to be accepted as viable occupations, then (a) valid and reliable measures of play must be developed, (b) interventions must be examined for inclusion of the elements of play, and (c) the promotion of play and leisure must be an explicit goal of occupational therapy intervention. Existing tools used by occupational therapists to assess clients' play and leisure are evaluated for the aspects of play and leisure they address and the aspects they fail to address. An argument is presented for the need for an assessment of playfulness, rather than of play or leisure activities. A preliminary model for the development of such an assessment is proposed.",
"title": ""
},
{
"docid": "999ead7b9f02e4d2f9e3e81f61f37152",
"text": "Successful long-term settlements on the Moon will need a supply of resources such as oxygen and water, yet the process of regularly transporting these resources from Earth would be prohibitively costly and dangerous. One alternative would be an approach using heterogeneous, autonomous robotic teams, which could collect and extract these resources from the surrounding environment (In-Situ Resource Utilization). The Whegs™ robotic platform, with its demonstrated capability to negotiate obstacles and traverse irregular terrain, is a good candidate for a lunar rover concept. In this research, Lunar Whegs™ is constructed as a proof-of-concept rover that would be able to navigate the surface of the moon, collect a quantity of regolith, and transport it back to a central processing station. The robot incorporates an actuated scoop, specialized feet for locomotion on loose substrates, Light Detection and Ranging (LIDAR) obstacle sensing and avoidance, and sealing and durability features for operation in an abrasive environment.",
"title": ""
},
{
"docid": "4a16195478fcb1285ed5e5129a49199d",
"text": "BACKGROUND AND PURPOSE\nLittle research has been done regarding the attitudes and behaviors of physical therapists relative to the use of evidence in practice. The purposes of this study were to describe the beliefs, attitudes, knowledge, and behaviors of physical therapist members of the American Physical Therapy Association (APTA) as they relate to evidence-based practice (EBP) and to generate hypotheses about the relationship between these attributes and personal and practice characteristics of the respondents.\n\n\nMETHODS\nA survey of a random sample of physical therapist members of APTA resulted in a 48.8% return rate and a sample of 488 that was fairly representative of the national membership. Participants completed a questionnaire designed to determine beliefs, attitudes, knowledge, and behaviors regarding EBP, as well as demographic information about themselves and their practice settings. Responses were summarized for each item, and logistic regression analyses were used to examine relationships among variables.\n\n\nRESULTS\nRespondents agreed that the use of evidence in practice was necessary, that the literature was helpful in their practices, and that quality of patient care was better when evidence was used. Training, familiarity with and confidence in search strategies, use of databases, and critical appraisal tended to be associated with younger therapists with fewer years since they were licensed. Seventeen percent of the respondents stated they read fewer than 2 articles in a typical month, and one quarter of the respondents stated they used literature in their clinical decision making less than twice per month. The majority of the respondents had access to online information, although more had access at home than at work. According to the respondents, the primary barrier to implementing EBP was lack of time.\n\n\nDISCUSSION AND CONCLUSION\nPhysical therapists stated they had a positive attitude about EBP and were interested in learning or improving the skills necessary to implement EBP. They noted that they needed to increase the use of evidence in their daily practice.",
"title": ""
},
{
"docid": "799a7754fbcd9c5d42a0165448c89471",
"text": "• How accurate are people in judging\" traits of other users? • Are there systematic biases humans\" are subject to? • What are the implications of using\" human perception as a proxy for truth? • Which textual cues lead to a false \" perception of the truth? • Which textual cues make people\" more or less confident in their ratings? • Gender 2,607 authors, age – 826 authors • we use 100 tweets per author, 9 Mturk votes per author • URLs and mentions anonymized, English only filtered, duplicates eliminated, same 6 month time interval GENDER PERCEPTION",
"title": ""
},
{
"docid": "ef4272cd4b0d4df9aa968cc9ff528c1e",
"text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.",
"title": ""
},
{
"docid": "51b766b0a7f1e3bc1f49d16df04a69f7",
"text": "This study reports the results of a biometrical genetical analysis of scores on a personality inventory (The Eysenck Personality Questionnaire, or EPQ), which purports to measure psychoticism, neuroticism, extraversion and dissimulation (Lie Scale). The subjects were 544 pairs of twins, from the Maudsley Twin Register. The purpose of the study was to test the applicability of various genotypeenvironmental models concerning the causation of P scores. Transformation of the raw scores is required to secure a scale on which the effects of genes and environment are additive. On such a scale 51% of the variation in P is due to environmental differences within families, but the greater part (77%) of this environmental variation is due to random effects which are unlikely to be controllable. . The genetical consequences ot'assortative mating were too slight to be detectable in this study, and the genetical variation is consistent with the hypothesis that gene effects are additive. This is a general finding for traits which have been subjected to stabilizing selection. Our model for P is consistent with these advanced elsewhere to explain the origin of certain kinds of psychopathology. The data provide little support for the view that the \"family environment\" (including the environmental influence of parents) plays a major part in the determination of individual differences in P, though we cite evidence suggesting that sibling competition effects are producing genotypeenvironmental covariation for the determinants of P in males. The genetical and environmental determinants of the covariation of P with other personality dimensions are considered. Assumptions are discussed and tested where possible.",
"title": ""
},
{
"docid": "08df1d2819d021711ebfc60589d27e90",
"text": "Polyp has long been considered as one of the major etiologies to colorectal cancer which is a fatal disease around the world, thus early detection and recognition of polyps plays an crucial role in clinical routines. Accurate diagnoses of polyps through endoscopes operated by physicians becomes a chanllenging task not only due to the varying expertise of physicians, but also the inherent nature of endoscopic inspections. To facilitate this process, computer-aid techniques that emphasize on fully-conventional image processing and novel machine learning enhanced approaches have been dedicatedly designed for polyp detection in endoscopic videos or images. Among all proposed algorithms, deep learning based methods take the lead in terms of multiple metrics in evolutions for algorithmic performance. In this work, a highly effective model, namely the faster region-based convolutional neural network (Faster R-CNN) is implemented for polyp detection. In comparison with the reported results of the state-of-the-art approaches on polyps detection, extensive experiments demonstrate that the Faster R-CNN achieves very competing results, and it is an efficient approach for clinical practice.",
"title": ""
},
{
"docid": "06991ed314e4b5cbde8d09d137f69144",
"text": "In this paper, we study the problem of controlling chaos in a memristor-based Chua's circuit, which can be represented as a linear switched system. A linear switched controller is obtained by solving a set of LMIs based on a common Lyapunov function. Finally, a numerical simulation is provided to illustrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "0cc02773fd194c42071f8500a0c88261",
"text": "Neuroscientific and psychological data suggest a close link between affordance and mirror systems in the brain. However, we still lack a full understanding of both the individual systems and their interactions. Here, we propose that the architecture and functioning of the two systems is best understood in terms of two challenges faced by complex organisms, namely: (a) the need to select among multiple affordances and possible actions dependent on context and high-level goals and (b) the exploitation of the advantages deriving from a hierarchical organisation of behaviour based on actions and action-goals. We first review and analyse the psychological and neuroscientific literature on the mechanisms and processes organisms use to deal with these challenges. We then analyse existing computational models thereof. Finally we present the design of a computational framework that integrates the reviewed knowledge. The framework can be used both as a theoretical guidance to interpret empirical data and design new experiments, and to design computational models addressing specific problems debated in the literature.",
"title": ""
},
{
"docid": "1d4309cfff1aff77aa4882e355a807b9",
"text": "VLIW architectures are popular in embedded systems because they offer high-performance processing at low cost and energy. The major problem with traditional VLIW designs is that they do not scale efficiently due to bottlenecks that result from centralized resources and global communication. Multicluster designs have been proposed to solve the scaling problem of VLIW datapaths, while much less work has been done on the control path. In this paper, we propose a distributed control path architecture for VLIW processors (DVLIW) to overcome the scalability problem of VLIW control paths. The architecture simplifies the dispersal of complex VLIW instructions and supports efficient distribution of instructions through a limited bandwidth interconnect, while supporting compressed instruction encodings. DVLIW employs a multicluster design where each cluster contains a local instruction memory that provides all intra-cluster control. All clusters have their own program counter and instruction sequencing capabilities, thus instruction execution is completely decentralized. The architecture executes multiple instruction streams at the same time, but these streams collectively function as a single logical instruction stream. Simulation results show that DVLIW processors reduce the number of cross-chip control signals by approximately two orders of magnitude while incurring a small performance overhead to explicitly manage the instruction streams.",
"title": ""
},
{
"docid": "981cbb9140570a6a6f3d4f4f49cd3654",
"text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in a AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median of 74 (65 to 81) years of age, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.",
"title": ""
},
{
"docid": "f181c3fe17392239e5feaef02c37dd11",
"text": "We present a formal model of synchronous processes without distinct identifiers (i.e., anonymous processes) that communicate using one-way public broadcasts. Our main contribution is a proof that the Bitcoin protocol achieves consensus in this model, except for a negligible probability, when Byzantine faults make up less than half the network. The protocol is scalable, since the running time and message complexity are all independent of the size of the network, instead depending only on the relative computing power of the faulty processes. We also introduce a requirement that the protocol must tolerate an arbitrary number of passive clients that receive broadcasts but can not send. This leads to a tight 2f + 1 resilience bound.",
"title": ""
},
{
"docid": "49a525fd20a4d53b17619e1c81696fce",
"text": "Patients irradiated for left-sided breast cancer have higher incidence of cardiovascular disease than those receiving irradiation for right-sided breast cancer. Most abnormalities were in the left anterior descending (LAD) coronary artery territory. We analyzed the relationships between preoperative examination results and irradiation dose to the LAD artery in patients with left-sided breast cancer. Seventy-one patients receiving breast radiotherapy were analyzed. The heart may rotate around longitudinal axis, showing either clockwise or counterclockwise rotation (CCWR). On electrocardiography, the transition zone (TZ) was judged in precordial leads. CCWR was considered to be present if TZ was at or to the right of V3. The prescribed dose was 50 Gy in 25 fractions. The maximum (Dmax) and mean (Dmean) doses to the LAD artery and the volumes of the LAD artery receiving at least 20 Gy, 30 Gy and 40 Gy (V20Gy, V30Gy and V40Gy, respectively) were significantly higher in CCWR than in the non-CCWR patients. On multivariate analysis, TZ was significantly associated with Dmax, Dmean, V20Gy, V30Gy, and V40Gy. CCWR is a risk factor for high-dose irradiation to the LAD artery. Electrocardiography is useful for evaluating the cardiovascular risk of high-dose irradiation to the LAD artery.",
"title": ""
}
] |
scidocsrr
|
4a6a3a51a7a864ca3de83ed4fd2f757d
|
Evolving Culture vs Local Minima
|
[
{
"docid": "eb3a212d81fd1d2ebd971a01e011d70d",
"text": "Humans and animals can perform much more complex tasks than they can acquire using pure trial and error learning. This gap is filled by teaching. One important method of instruction is shaping, in which a teacher decomposes a complete task into sub-components, thereby providing an easier path to learning. Despite its importance, shaping has not been substantially studied in the context of computational modeling of cognitive learning. Here we study the shaping of a hierarchical working memory task using an abstract neural network model as the target learner. Shaping significantly boosts the speed of acquisition of the task compared with conventional training, to a degree that increases with the temporal complexity of the task. Further, it leads to internal representations that are more robust to task manipulations such as reversals. We use the model to investigate some of the elements of successful shaping.",
"title": ""
}
] |
[
{
"docid": "9fdaddce26965be59f9d46d06fa0296a",
"text": "Using emotion detection technologies from biophysical signals, this study explored how emotion evolves during learning process and how emotion feedback could be used to improve learning experiences. This article also described a cutting-edge pervasive e-Learning platform used in a Shanghai online college and proposed an affective e-Learning model, which combined learners’ emotions with the Shanghai e-Learning platform. The study was guided by Russell’s circumplex model of affect and Kort’s learning spiral model. The results about emotion recognition from physiological signals achieved a best-case accuracy (86.3%) for four types of learning emotions. And results from emotion revolution study showed that engagement and confusion were the most important and frequently occurred emotions in learning, which is consistent with the findings from AutoTutor project. No evidence from this study validated Kort’s learning spiral model. An experimental prototype of the affective e-Learning model was built to help improve students’ learning experience by customizing learning material delivery based on students’ emotional state. Experiments indicated the superiority of emotion aware over non-emotion-aware with a performance increase of 91%.",
"title": ""
},
{
"docid": "d8a68a9e769f137e06ab05e4d4075dce",
"text": "The inelastic response of existing reinforced concrete (RC) buildings without seismic details is investigated, presenting the results from more than 1000 nonlinear analyses. The seismic performance is investigated for two buildings, a typical building form of the 60s and a typical form of the 80s. Both structures are designed according to the old Greek codes. These building forms are typical for that period for many Southern European countries. Buildings of the 60s do not have seismic details, while buildings of the 80s have elementary seismic details. The influence of masonry infill walls is also investigated for the building of the 60s. Static pushover and incremental dynamic analyses (IDA) for a set of 15 strong motion records are carried out for the three buildings, two bare and one infilled. The IDA predictions are compared with the results of pushover analysis and the seismic demand according to Capacity Spectrum Method (CSM) and N2 Method. The results from IDA show large dispersion on the response, available ductility capacity, behaviour factor and failure displacement, depending on the strong motion record. CSM and N2 predictions are enveloped by the nonlinear dynamic predictions, but have significant differences from the mean values. The better behaviour of the building of the 80s compared to buildings of the 60s is validated with both pushover and nonlinear dynamic analyses. Finally, both types of analysis show that fully infilled frames exhibit an improved behaviour compared to bare frames.",
"title": ""
},
{
"docid": "6f99852d599ee533da2a7c58f9b90c42",
"text": "Searching for Web service access points is no longer attached to service registries as Web search engines have become a new major source for discovering Web services. In this work, we conduct a thorough analytical investigation on the plurality of Web service interfaces that exist on the Web today. Using our Web Service Crawler Engine (WSCE), we collect metadata service information on retrieved interfaces through accessible UBRs, service portals and search engines. We use this data to determine Web service statistics and distribution based on object sizes, types of technologies employed, and the number of functioning services. This statistical data can be used to help determine the current status of Web services. We determine an intriguing result that 63% of the available Web services on the Web are considered to be active. We further use our findings to provide insights on improving the service retrieval process.",
"title": ""
},
{
"docid": "8eb96feea999ce77f2b56b7941af2587",
"text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility. a 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cddcdaf283c8c71d34dccf179d7b33da",
"text": "Automatic traffic sign detection and recognition is a field of computer vision which is very important aspect for advanced driver support system. This paper proposes a framework that will detect and classify different types of traffic signs from images. The technique consists of two main modules: road sign detection, and classification and recognition. In the first step, colour space conversion, colour based segmentation are applied to find out if a traffic sign is present. If present, the sign will be highlighted, normalized in size and then classified. Neural network is used for classification purposes. For evaluation purpose, four type traffic signs such as Stop Sign, No Entry Sign, Give Way Sign, and Speed Limit Sign are used. Altogether 300 sets images, 75 sets for each type are used for training purposes. 200 images are used testing. The experimental results show the detection rate is above 90% and the accuracy of recognition is more than 88%.",
"title": ""
},
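A rough sketch of the colour-based segmentation step described in the passage above, using OpenCV. The HSV thresholds, the focus on red-rimmed signs, and the minimum-area filter are illustrative assumptions, not values taken from that paper.

```python
# Hypothetical colour-segmentation front end for sign detection (illustration only).
import cv2
import numpy as np

def candidate_sign_regions(bgr_image, min_area=400):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges (assumed thresholds).
    lower = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep reasonably large blobs; each box would then be resized and classified.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```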
{
"docid": "5e435e0bd1ebdd1f86b57e40fc047366",
"text": "Deep clustering is a recently introduced deep learning architecture that uses discriminatively trained embeddings as the basis for clustering. It was recently applied to spectrogram segmentation, resulting in impressive results on speaker-independent multi-speaker separation. In this paper we extend the baseline system with an end-to-end signal approximation objective that greatly improves performance on a challenging speech separation. We first significantly improve upon the baseline system performance by incorporating better regularization, larger temporal context, and a deeper architecture, culminating in an overall improvement in signal to distortion ratio (SDR) of 10.3 dB compared to the baseline of 6.0 dB for two-speaker separation, as well as a 7.1 dB SDR improvement for three-speaker separation. We then extend the model to incorporate an enhancement layer to refine the signal estimates, and perform end-to-end training through both the clustering and enhancement stages to maximize signal fidelity. We evaluate the results using automatic speech recognition. The new signal approximation objective, combined with end-to-end training, produces unprecedented performance, reducing the word error rate (WER) from 89.1% down to 30.8%. This represents a major advancement towards solving the cocktail party problem.",
"title": ""
},
{
"docid": "3dd518c87372b51a9284e4b8aa2e4fb4",
"text": "Traditional background modeling and subtraction methods have a strong assumption that the scenes are of static structures with limited perturbation. These methods will perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend the local binary patterns from spatial domain to spatio-temporal domain, and present a new online dynamic texture extraction operator, named spatio- temporal local binary patterns (STLBP). Then we present a novel and effective method for dynamic background modeling and subtraction using STLBP. In the proposed method, each pixel is modeled as a group of STLBP dynamic texture histograms which combine spatial texture and temporal motion information together. Compared with traditional methods, experimental results show that the proposed method adapts quickly to the changes of the dynamic background. It achieves accurate detection of moving objects and suppresses most of the false detections for dynamic changes of nature scenes.",
"title": ""
},
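As a minimal illustration of the LBP-histogram idea behind the passage above, the sketch below computes a plain spatial LBP code and a running-average histogram model. The spatio-temporal extension and the update rate are simplifying assumptions, not the operator from that paper.

```python
# Simplified LBP-histogram background model (illustration of the general idea only).
import numpy as np

def lbp_image(gray):
    """8-neighbour LBP code for each interior pixel of a 2-D grayscale array."""
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code += (neighbour >= c).astype(np.int32) << bit
    return code

def histogram(code, bins=256):
    h = np.bincount(code.ravel(), minlength=bins).astype(float)
    return h / h.sum()

def update_model(model_hist, new_hist, alpha=0.05):
    # Running-average update; the learning rate alpha is an assumed parameter.
    return (1 - alpha) * model_hist + alpha * new_hist
```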
{
"docid": "e1f163487e5d6b2d781dc313426403e7",
"text": "Inadvertent data disclosure by insiders is considered as one of the biggest threats for corporate information security. Data loss prevention systems typically try to cope with this problem by monitoring access to confidential data and preventing their leakage or improper handling. Current solutions in this area, however, often provide limited means to enforce more complex security policies that for instance specify temporal or cardinal constraints on the execution of events. This paper presents UC4Win, a data loss prevention solution for Microsoft Windows operating systems that is based on the concept of data-driven usage control to allow such a fine-grained policy-based protection. UC4Win is capable of detecting and controlling data-loss related events at the level of individual function calls. This is done with function call interposition techniques to intercept application calls to the Windows API in combination with methods to track the flows of confidential data through the system.",
"title": ""
},
{
"docid": "576d911990bb207eebaaca6ab137cc7a",
"text": "The online fingerprints by biometric system is not widely used now a days and there is less scope as user is friendly with the system. This paper represents a framework and applying the latent fingerprints obtained from the crime scene. These prints would be matched with our database and we identify the criminal. For this process we have to get the fingerprints of all the citizens. This technique may reduce the crime to a large extent. Latent prints are different from the patent prints. These fingerprints are found at the time of crime and these fingerprints are left accidentally. By this approach we collect these fingerprints by chemicals, powder, lasers and other physical means. Sometimes, fingerprints have a broken curve and it is not so clear due to low pressure. We apply the M_join algorithm to join the curve to achieve better results. Thus, our proposed approach eliminates the pseudo minutiae and joins the broken curves in fingerprints.",
"title": ""
},
{
"docid": "05eb344fb8b671542f6f0228774a5524",
"text": "This paper presents an improved hardware structure for the computation of the Whirlpool hash function. By merging the round key computation with the data compression and by using embedded memories to perform part of the Galois Field (28) multiplication, a core can be implemented in just 43% of the area of the best current related art while achieving a 12% higher throughput. The proposed core improves the Throughput per Slice compared to the state of the art by 160%, achieving a throughput of 5.47 Gbit/s with 2110 slices and 32 BRAMs on a VIRTEX II Pro FPGA. Results for a real application are also presented by considering a polymorphic computational approach.",
"title": ""
},
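The GF(2^8) multiplication that the passage above partly offloads to embedded memories can be sketched in software as follows. The reduction polynomial 0x11D is the one commonly associated with Whirlpool; treat it as an assumption here rather than a value taken from that paper.

```python
# Bit-serial multiplication in GF(2^8); the hardware version would precompute
# tables of such products and store them in block RAMs.
def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:      # degree-8 overflow: reduce modulo the field polynomial
            a ^= poly
    return result & 0xFF

# Example of the kind of lookup table that could be stored in embedded memory:
MUL_BY_2 = [gf256_mul(x, 2) for x in range(256)]
```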
{
"docid": "a5614379a447180fe0ab5ab83770dafb",
"text": "This paper presents a novel method for performing an efficient cost aggregation in stereo matching. The cost aggregation problem is re-formulated with a perspective of a histogram, and it gives us a potential to reduce the complexity of the cost aggregation significantly. Different from the previous methods which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy which exists among the search range, caused by a repeated filtering for all disparity hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The trade-off between accuracy and complexity is extensively investigated into parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity. This work provides new insights into complexity-constrained stereo matching algorithm design.",
"title": ""
},
{
"docid": "84b0d19d5d383ea3fd99e20740ebf5d6",
"text": "We propose a robust proactive threshold signature scheme, a multisignature scheme and a blind signature scheme which work in any Gap Diffie-Hellman (GDH) group (where the Computational Diffie-Hellman problem is hard but the Decisional Diffie-Hellman problem is easy). Our constructions are based on the recently proposed GDH signature scheme of Boneh et al. [BLS]. Due to the nice properties of GDH groups and of the base scheme, it turns out that most of our constructions are much simpler, more efficient and have more useful characteristics than similar existing constructions. We support all the proposed schemes with proofs under the appropriate computational assumptions, using the corresponding notions of security.",
"title": ""
},
{
"docid": "90fe763855ca6c4fabe4f9d042d5c61a",
"text": "While learning models of intuitive physics is an increasingly active area of research, current approaches still fall short of natural intelligences in one important regard: they require external supervision, such as explicit access to physical states, at training and sometimes even at test times. Some authors have relaxed such requirements by supplementing the model with an handcrafted physical simulator. Still, the resulting methods are unable to automatically learn new complex environments and to understand physical interactions within them. In this work, we demonstrated for the first time learning such predictors directly from raw visual observations and without relying on simulators. We do so in two steps: first, we learn to track mechanically-salient objects in videos using causality and equivariance, two unsupervised learning principles that do not require auto-encoding. Second, we demonstrate that the extracted positions are sufficient to successfully train visual motion predictors that can take the underlying environment into account. We validate our predictors on synthetic datasets; then, we introduce a new dataset, ROLL4REAL, consisting of real objects rolling on complex terrains (pool table, elliptical bowl, and random height-field). We show that in all such cases it is possible to learn reliable extrapolators of the object trajectories from raw videos alone, without any form of external supervision and with no more prior knowledge than the choice of a convolutional neural network architecture.",
"title": ""
},
{
"docid": "e7bd07b86b8f1b50641853c06461ce89",
"text": "Purpose – The purpose of this study is to conduct a scientometric analysis of the body of literature contained in 11 major knowledge management and intellectual capital (KM/IC) peer-reviewed journals. Design/methodology/approach – A total of 2,175 articles published in 11 major KM/IC peer-reviewed journals were carefully reviewed and subjected to scientometric data analysis techniques. Findings – A number of research questions pertaining to country, institutional and individual productivity, co-operation patterns, publication frequency, and favourite inquiry methods were proposed and answered. Based on the findings, many implications emerged that improve one’s understanding of the identity of KM/IC as a distinct scientific field. Research limitations/implications – The pool of KM/IC journals examined did not represent all available publication outlets, given that at least 20 peer-reviewed journals exist in the KM/IC field. There are also KM/IC papers published in other non-KM/IC specific journals. However, the 11 journals that were selected for the study have been evaluated by Bontis and Serenko as the top publications in the KM/IC area. Practical implications – Practitioners have played a significant role in developing the KM/IC field. However, their contributions have been decreasing. There is still very much a need for qualitative descriptions and case studies. It is critically important that practitioners consider collaborating with academics for richer research projects. Originality/value – This is the most comprehensive scientometric analysis of the KM/IC field ever conducted.",
"title": ""
},
{
"docid": "40413aa7fd92e042b8c359b2cf6d2d23",
"text": "Text summarization is the process of creating a short description of a specified text while preserving its information context. This paper tackles Arabic text summarization problem. The semantic redundancy and insignificance will be removed from the summarized text. This can be achieved by checking the text entailment relation, and lexical cohesion. Accordingly, a text summarization approach (called LCEAS) based on lexical cohesion and text entailment relation is developed. In LCEAS, text entailment approach is enhanced to suit Arabic language. Roots and semantic-relations are used between the senses of the words to extract the common words. New threshold values are specified to suit entailment based segmentation for Arabic text. LCEAS is a single document summarization, which is constructed using extraction technique. To evaluate LCEAS, its performance is compared with previous Arabic text summarization systems. Each system output is compared against Essex Arabic Summaries Corpus (EASC) corpus (the model summaries), using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Automatic Summarization Engineering (AutoSummEng) metrics. The outcome of LCEAS indicates that the developed approach outperforms the previous Arabic text summarization systems. KeywordsText Summarization; Text Segmentation; Lexical Cohesion; Text Entailment; Natural Language Processing.",
"title": ""
},
{
"docid": "38e9aa4644edcffe87dd5ae497e99bbe",
"text": "Hashtags, created by social network users, have gained a huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academia starts to use hashtags to reshape our understandings on how users interact with each other. #like4like is one of the most popular hashtags in Instagram with more than 290 million photos appended with it, when a publisher uses #like4like in one photo, it means that he will like back photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes this photo, and both of them aim to attract likes in Instagram. In this paper, we study whether #like4like indeed serves the purpose it is created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity has dramatically increased by 1,300 times from 2012 to 2016. Then, we study whether #like4like will attract likes for photo publishers; results show that it is not #like4like but actually photo contents attract more likes, and the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos will receive likes from #like4like publishers. However, results show that more than 90% of the publishers do not keep their promises, i.e., they will not like back others who like their #like4like photos; and for those who keep their promises, the photos which they like back are often randomly selected.",
"title": ""
},
{
"docid": "477ab18817f247b9f17fb78b5ac08dbf",
"text": "Ray marching, also known as sphere tracing, is an efficient empirical method for rendering implicit surfaces using distance fields. The method marches along the ray with step lengths, provided by the distance field, that are guaranteed not to penetrate the scene. As a result, it provides an efficient method of rendering implicit surfaces, such as constructive solid geometry, recursive shapes, and fractals, as well as producing cheap empirical visual effects, such as ambient occlusion, subsurface scattering, and soft shadows. The goal of this project is to bring interactive ray marching to the web platform. The project will focus on the robustness of the render itself. It should run with reasonable performance in real-time and provide an interface where the user can interactively change the viewing angle and modify rendering options. It is also expected to run on the latest WebGL supported browser, on any machine. CR Categories: I.3.3 [Computer Graphics]: Three-Dimensional Graphics and Realism—Display Algorithms",
"title": ""
},
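The marching step described in the passage above reduces to a small loop. The sketch below traces a single sphere signed distance function in plain Python, with made-up constants, purely to illustrate the algorithm rather than the WebGL implementation.

```python
# Minimal sphere tracing (ray marching) loop over one SDF; scene is hypothetical.
import math

def sphere_sdf(p, centre=(0.0, 0.0, 3.0), radius=1.0):
    return math.dist(p, centre) - radius

def ray_march(origin, direction, sdf=sphere_sdf,
              max_steps=128, max_dist=100.0, eps=1e-4):
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:          # hit: within eps of the surface
            return t
        t += d               # safe step: the sphere of radius d around p is empty
        if t > max_dist:
            break
    return None              # ray escaped the scene

hit = ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))   # returns roughly 2.0
```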
{
"docid": "850cb2c41ef9e42df458156c4000f507",
"text": "A VANET is a network where each node represents a vehicle equipped with wireless communication technology. This type of network enhances road safety, traffic efficiency, Internet access and many others applications to minimize environmental impact and in general maximize the benefits for the road users. This paper studies a relevant problem in VANETs, known as the deployment of RSUs. A RSU is an access points, used together with the vehicles, to allow information dissemination in the roads. Knowing where to place these RSUs so that a maximum number of vehicles circulating is covered is a challenge. We model the problem as a Maximum Coverage with Time Threshold Problem (MCTTP), and use a genetic algorithm to solve it. The algorithm is tested in four real-world datasets, and compared to a greedy approach previously proposed in the literature. The results show that our approach finds better results than the greedy in all scenarios, with gains up to 11 percentage points.",
"title": ""
},
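The greedy baseline mentioned in the passage above can be sketched as standard greedy maximum coverage. The data layout and the omission of the time-threshold constraint of MCTTP are simplifications for illustration.

```python
# Greedy maximum-coverage placement of RSUs (time threshold ignored for brevity).
def greedy_rsu_placement(coverage_by_location, budget):
    """coverage_by_location: dict mapping a candidate location to the set of vehicle ids it covers."""
    chosen, covered = [], set()
    remaining = dict(coverage_by_location)
    for _ in range(budget):
        best = max(remaining, key=lambda loc: len(remaining[loc] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

locs = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
print(greedy_rsu_placement(locs, budget=2))   # picks "a" first, then "b"
```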
{
"docid": "8851c4383b10db7b0482eaf9417149ae",
"text": "There are many difficulties associated with developing correct multithreaded software, and many of the activities that are simple for single threaded software are exceptionally hard for multithreaded software. One such example is constructing unit tests involving multiple threads. Given, for example, a blocking queue implementation, writing a test case to show that it blocks and unblocks appropriately using existing testing frameworks is exceptionally hard. In this paper, we describe the MultithreadedTC framework which allows the construction of deterministic and repeatable unit tests for concurrent abstractions. This framework is not designed to test for synchronization errors that lead to rare probabilistic faults under concurrent stress. Rather, this framework allows us to demonstrate that code does provide specific concurrent functionality (e.g., a thread attempting to acquire a lock is blocked if another thread has the lock).\n We describe the framework and provide empirical comparisons against hand-coded tests designed for Sun's Java concurrency utilities library and against previous frameworks that addressed this same issue. The source code for this framework is available under an open source license.",
"title": ""
}
] |
scidocsrr
|
ab4534e8f8b5f3ae7d1e7b5bb1097d67
|
R3MC: A Riemannian three-factor algorithm for low-rank matrix completion
|
[
{
"docid": "215ccfeaf75d443e8eb6ead8172c9b92",
"text": "Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.",
"title": ""
}
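A minimal sketch of the gradient-based approach described above, assuming binary ratings in {-1, +1} and a plain (sub)gradient step; the paper's smoothed hinge over ordinal rating thresholds is simplified to an ordinary hinge here, so this is an illustration of the idea rather than the exact method.

```python
# Gradient-based low-rank factorization with a max-margin (hinge) loss; all
# hyperparameters below are illustrative assumptions.
import numpy as np

def mmmf_gradient_descent(Y, mask, rank=5, lam=0.1, lr=0.01, iters=500, seed=0):
    """Y: (n, m) matrix with entries in {-1, +1}; mask: boolean (n, m) of observed entries."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        margin = Y * (U @ V.T)
        active = (margin < 1) & mask        # observed entries where the hinge is active
        G = -(Y * active)                   # subgradient of the hinge w.r.t. U V^T
        U -= lr * (G @ V + lam * U)
        V -= lr * (G.T @ U + lam * V)
    return U, V
```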
] |
[
{
"docid": "f6e8eda4fa898a24f3a7d1116e49f42c",
"text": "This is the eBook of the printed book and may not include any media, website access codes, or print supplements that may come packaged with the bound book. Search Engines: Information Retrieval in Practice is ideal for introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. It is also a valuable tool for search engine and information retrieval professionals. В Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice , is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines.В Coverage of the underlying IR and mathematical models reinforce key concepts. The bookвЂTMs numerous programming exercises make extensive use of Galago, a Java-based open source search engine.",
"title": ""
},
{
"docid": "969b49b20271f2714ad96d739bf79f08",
"text": "Control of a robot manipulator in contact with the environment is usually conducted by the direct feedback control system using a force-torque sensor or the indirect impedance control scheme. Although these methods have been successfully applied to many applications, simultaneous control of force and position cannot be achieved. Furthermore, collision safety has been of primary concern in recent years with emergence of service robots in direct contact with humans. To cope with such problems, redundant actuation has been used to enhance the performance of a position/force controller. In this paper, the novel design of a double actuator unit (DAU) composed of double actuators and a planetary gear train is proposed to provide the capability of simultaneous control of position and force as well as the improved collision safety. Since one actuator controls position and the other actuator modulates stiffness, DAU can control the position and stiffness simultaneously at the same joint. The torque exerted on the joint can be estimated without an expensive torque/force sensor. DAU is capable of detecting dynamic collision by monitoring the speed of the stiffness modulator. Upon detection of dynamic collision, DAU immediately reduces its joint stiffness according to the collision magnitude, thus providing the optimum collision safety. It is shown from various experiments that DAU can provide good performance of position tracking, force estimation and collision safety.",
"title": ""
},
{
"docid": "e2d647ae5e758796069a36ffa44cf90a",
"text": "We describe a Genetic Algorithm that can evolve complete programs. Using a variable length linear genome to govern how a Backus Naur Form grammar deenition is mapped to a program, expressions and programs of arbitrary complexity may be evolved. Other automatic programming methods are described, before our system, Grammatical Evolution, is applied to a symbolic regression problem.",
"title": ""
},
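The genotype-to-phenotype mapping summarised above is easy to sketch: each codon, taken modulo the number of productions for the current non-terminal, selects a rule. The toy grammar and codon values below are made up for illustration.

```python
# Toy Grammatical Evolution mapping from a linear genome to an expression.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["1.0"]],
}

def map_genome(genome, start="<expr>", max_wraps=3):
    symbols, out, i, used = [start], [], 0, 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:            # terminal symbol
            out.append(sym)
            continue
        if used >= max_wraps * len(genome):
            return None                   # mapping failed to terminate
        codon = genome[i % len(genome)]   # wrap around the genome if needed
        i, used = i + 1, used + 1
        rule = GRAMMAR[sym][codon % len(GRAMMAR[sym])]
        symbols = rule + symbols          # leftmost derivation
    return " ".join(out)

print(map_genome([0, 1, 1, 2, 1, 0]))     # prints "1.0 * x"
```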
{
"docid": "d639f6b922e24aca7229ce561e852b31",
"text": "As digital video becomes more pervasive, e cient ways of searching and annotating video according to content will be increasingly important. Such tasks arise, for example, in the management of digital video libraries for content-based retrieval and browsing. In this paper, we develop tools based on camera motion for analyzing and annotating a class of structured video using the low-level information available directly from MPEG compressed video. In particular, we show that in certain structured settings it is possible to obtain reliable estimates of camera motion by directly processing data easily obtained from the MPEG format. Working directly with the compressed video greatly reduces the processing time and enhances storage e ciency. As an illustration of this idea, we have developed a simple basketball annotation system which combines the low-level information extracted from an MPEG stream with the prior knowledge of basketball structure to provide high level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, probable shots at the basket, etc. The methods used in this example should also be useful in the analysis of high-level content of structured video in other domains.",
"title": ""
},
{
"docid": "cb815a01960490760e2ac581e26f4486",
"text": "To solve the weakly-singular Volterra integro-differential equations, the combined method of the Laplace Transform Method and the Adomian Decomposition Method is used. As a result, series solutions of the equations are constructed. In order to explore the rapid decay of the equations, the pade approximation is used. The results present validity and great potential of the method as a powerful algorithm in order to present series solutions for singular kind of differential equations.",
"title": ""
},
{
"docid": "f5076644c68ec6261fab541066ad6df5",
"text": "Social media channels, such as Facebook or Twitter, allow for people to express their views and opinions about any public topics. Public sentiment related to future events, such as demonstrations or parades, indicate public attitude and therefore may be applied while trying to estimate the level of disruption and disorder during such events. Consequently, sentiment analysis of social media content may be of interest for different organisations, especially in security and law enforcement sectors. This paper presents a new lexicon-based sentiment analysis algorithm that has been designed with the main focus on real time Twitter content analysis. The algorithm consists of two key components, namely sentiment normalisation and evidence-based combination function, which have been used in order to estimate the intensity of the sentiment rather than positive/negative label and to support the mixed sentiment classification process. Finally, we illustrate a case study examining the relation between negative sentiment of twitter posts related to English Defence League and the level of disorder during the organisation’s related events.",
"title": ""
},
{
"docid": "1585d7e1f1e6950949dc954c2d0bba51",
"text": "The state-of-the-art techniques for aspect-level sentiment analysis focus on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their practical performance may fall short of expectations due to semantic complexity of natural languages. Motivated by the observation that linguistic hints (e.g. explicit sentiment words and shift words) can be strong indicators of sentiment, we present a joint framework, SenHint, which integrates the output of deep neural networks and the implication of linguistic hints into a coherent reasoning model based on Markov Logic Network (MLN). In SenHint, linguistic hints are used in two ways: (1) to identify easy instances, whose sentiment can be automatically determined by machine with high accuracy; (2) to capture implicit relations between aspect polarities. We also empirically evaluate the performance of SenHint on both English and Chinese benchmark datasets. Our experimental results show that SenHint can effectively improve accuracy compared with the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "abdacded192227f3eb8d328c650507a4",
"text": "We propose a novel integrated fog cloud IoT (IFCIoT) architectural paradigm that promises increased performance, energy efficiency, reduced latency, quicker response time, scalability, and better localized accuracy for future IoT applications. The fog nodes (e.g., edge servers, smart routers, base stations) receive computation offloading requests and sensed data from various IoT devices. To enhance performance, energy efficiency, and real-time responsiveness of applications, we propose a reconfigurable and layered fog node (edge server) architecture that analyzes the applications’ characteristics and reconfigure the architectural resources to better meet the peak workload demands. The layers of the proposed fog node architecture include application layer, analytics layer, virtualization layer, reconfiguration layer, and hardware layer. The layered architecture facilitates abstraction and implementation for fog computing paradigm that is distributed in nature and where multiple vendors (e.g., applications, services, data and content providers) are involved. We also elaborate the potential applications of IFCIoT architecture, such as smart cities, intelligent transportation systems, localized weather maps and environmental monitoring, and real-time agricultural data analytics and control. Index Terms —Fog computing, edge computing, Internet of things, reconfigurable architecture, radio access network",
"title": ""
},
{
"docid": "21378678c661aa581c7331b16ae398ff",
"text": "Automated topic labelling brings benefits for users aiming at analysing and understanding document collections, as well as for search engines targetting at the linkage between groups of words and their inherent topics. Current approaches to achieve this suffer in quality, but we argue their performances might be improved by setting the focus on the structure in the data. Building upon research for concept disambiguation and linking to DBpedia, we are taking a novel approach to topic labelling by making use of structured data exposed by DBpedia. We start from the hypothesis that words co-occuring in text likely refer to concepts that belong closely together in the DBpedia graph. Using graph centrality measures, we show that we are able to identify the concepts that best represent the topics. We comparatively evaluate our graph-based approach and the standard text-based approach, on topics extracted from three corpora, based on results gathered in a crowd-sourcing experiment. Our research shows that graph-based analysis of DBpedia can achieve better results for topic labelling in terms of both precision and topic coverage.",
"title": ""
},
{
"docid": "6ff6f2b0c7e7308ec6d7acdf4c3e5a47",
"text": "Event-related brain potentials (ERPs) were recorded from participants listening to or reading sentences that were correct, contained a violation of the required syntactic category, or contained a syntactic-category ambiguity. When sentences were presented auditorily (Experiment 1), there was an early left anterior negativity for syntactic-category violations, but not for syntactic-category ambiguities. Both anomaly types elicited a late centroparietally distributed positivity. When sentences were presented visually word by word (Experiment 2), again an early left anterior negativity was found only for syntactic-category violations, and both types of anomalies elicited a late positivity. The combined data are taken to be consistent with a 2-stage model of parsing, including a 1st stage, during which an initial phrase structure is built and a 2nd stage, during which thematic role assignment and, if necessary, reanalysis takes place. Disruptions to the 1st stage of syntactic parsing appear to be correlated with an early left anterior negativity, whereas disruptions to the 2nd stage might be correlated with a late posterior distributed positivity.",
"title": ""
},
{
"docid": "12840153a7f2be146a482ed78e7822a6",
"text": "We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost. ! 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "24c62c2660ece8c0c724f745cb050964",
"text": "Face detection is a classical problem in computer vision. It is still a difficult task due to many nuisances that naturally occur in the wild. In this paper, we propose a multi-scale fully convolutional network for face detection. To reduce computation, the intermediate convolutional feature maps (conv) are shared by every scale model. We up-sample and down-sample the final conv map to approximate K levels of a feature pyramid, leading to a wide range of face scales that can be detected. At each feature pyramid level, a FCN is trained end-to-end to deal with faces in a small range of scale change. Because of the up-sampling, our method can detect very small faces (10×10 pixels). We test our MS-FCN detector on four public face detection datasets, including FDDB, WIDER FACE, AFW and PASCAL FACE. Extensive experiments show that it outperforms state-of-the-art methods. Also, MS-FCN runs at 23 FPS on a GPU for images of size 640×480 with no assumption on the minimum detectable face size.",
"title": ""
},
{
"docid": "9852e00f24fd8f626a018df99bea5f1f",
"text": "Business Process Reengineering is a discipline in which extensive research has been carried out and numerous methodologies churned out. But what seems to be lacking is a structured approach. In this paper we provide a review of BPR and present ‘best of breed ‘ methodologies from contemporary literature and introduce a consolidated, systematic approach to the redesign of a business enterprise. The methodology includes the five activities: Prepare for reengineering, Map and Analyze As-Is process, Design To-be process, Implement reengineered process and Improve continuously.",
"title": ""
},
{
"docid": "6115bdfac611e5e2423784118471fbd8",
"text": "We address the problem of minimizing the communication involved in the exchange of similar documents. We consider two users, A and B, who hold documents x and y respectively. Neither of the users has any information about the other’s document. They exchange messages so that B computes x; it may be required that A compute y as well. Our goal is to design communication protocols with the main objective of minimizing the total number of bits they exchange; other objectives are minimizing the number of rounds and the complexity of internal computations. An important notion which determines the efficiency of the protocols is how one measures the distance between x and y. We consider several metrics for measuring this distance, namely the Hamming metric, the Levenshtein metric (edit distance), and a new LZ metric, which is introduced in this paper. We show how to estimate the distance between x and y using a single message of logarithmic size. For each metric, we present the first communication-efficient protocols, which often match the corresponding lower bounds. In consequence, we obtain error-correcting codes for these error models which correct up to d errors in n characters using O(d·polylog(n)) bits. Our most interesting methods use a new transformation from LZ distance to Hamming distance.",
"title": ""
},
{
"docid": "a4865029148c6803b26d40723c89ff93",
"text": "Introduction One of the greatest challenges in cosmetic rhinoplasty is the overly thick nasal skin envelope. In addition to exacerbating unwanted nasal width, thick nasal skin is a major impediment to aesthetic refinement of the nose. Owing to its bulk, noncompliance, and tendency to scar, overly thick skin frequently obscures topographic definition of the nasal framework, thereby limiting or negating cosmetic improvements. Masking of the skeletal contour is usually most evident following aggressive reduction rhinoplasty where overly thick and noncompliant nasal skin fails to shrink and conform to the smaller skeletal framework. The result is excessive subcutaneous dead space leading to further fibrotic thickening of the already bulky nasal covering. Despite the decrease in nasal size, the resulting nasal contour is typically amorphous, ill-defined and devoid of beauty and elegance. To optimize cosmetic results in thick-skinned noses, contour enhancement is best achieved by elongating and projecting the skeletal framework whenever possible (Figure 1). Skeletal augmentation not only reduces dead space to minimize fibrotic thickening, it also stretches and thins the outer soft-tissue covering for improved surface definition. However, in noses in which the nasal framework is already too large, skeletal augmentation is not a viable option, and the overly thick skin envelope must be surgically thinned to achieve better skin contractility and improved cosmetic outcomes. Histologic examination of overly thick nasal tip skin reveals comparatively little dermal thickening or increased adipose content but ratherasubstantial increaseinthicknessofthesubcutaneousfibromuscular tissues.1 Dubbed the “nasal SMAS” layer,2 the fibromuscular tissue layer lies just beneath the subdermal fat and may account for an additional 2 to 3 mm of skin flap thickness. Owing to a discrete dissection plane separating the nasal SMAS layer from the overlying subdermal fat, surgical excision of the hypertrophic nasal SMAS layer can be performed safely in healthy candidates using the external rhinoplasty approach.3 However,theoverlyingsubdermalplexus(containedwithin the subdermal fat) must be carefully protected.3-5 Similarly, inadvertent disruption of the paired lateral nasal arteries—major feeding vessels to the subdermal plexus—must also be avoided, and special care should be exercised when working near the alar crease.3-5 SMAS debulking is also contraindicated in skin less than 3-mm thick because overly aggressive surgical debulking may lead to unsightly prominence of the skeletal topography. However, in the appropriate patient, SMAS debulking can reduce skin envelope thickness by as much as 3.0 mm, with greater reductions common in revision rhinoplasty cases when vascularity permits.6",
"title": ""
},
{
"docid": "4bba56323edd0d2bc1baca07c1cee14e",
"text": "In this paper, we propose Personalized Markov Embedding (PME), a next-song recommendation strategy for online karaoke users. By modeling the sequential singing behavior, we first embed songs and users into a Euclidean space in which distances between songs and users reflect the strength of their relationships. Then, given each user's last song, we can generate personalized recommendations by ranking the candidate songs according to the embedding. Moreover, PME can be trained without any requirement of content information. Finally, we perform an experimental evaluation on a real world data set provided by ihou.com which is an online karaoke website launched by iFLYTEK, and the results clearly demonstrate the effectiveness of PME.",
"title": ""
},
{
"docid": "45a24862022bbc1cf3e33aea1e4f8b12",
"text": "Biohybrid consists of a living organism or cell and at least one engineered component. Designing robot-plant biohybrids is a great challenge: it requires interdisciplinary reconsideration of capabilities intimate specific to the biology of plants. Envisioned advances should improve agricultural/horticultural/social practice and could open new directions in utilization of plants by humans. Proper biohybrid cooperation depends upon effective communication. During evolution, plants developed many ways to communicate with each other, with animals, and with microorganisms. The most notable examples are: the use of phytohormones, rapid long-distance signaling, gravity, and light perception. These processes can now be intentionally re-shaped to establish plant-robot communication. In this article, we focus on plants physiological and molecular processes that could be used in bio-hybrids. We show phototropism and biomechanics as promising ways of effective communication, resulting in an alteration in plant architecture, and discuss the specifics of plants anatomy, physiology and development with regards to the bio-hybrids. Moreover, we discuss ways how robots could influence plants growth and development and present aims, ideas, and realized projects of plant-robot biohybrids.",
"title": ""
},
{
"docid": "44ca351c024e61b06b1709ba0e4db44f",
"text": "Rootkits affect system security by modifying kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify noncontrol data. Most prior techniques for rootkit detection have focused solely on detecting control data modifications and, therefore, fail to detect such rootkits. This paper presents a novel technique to detect rootkits that modify both control and noncontrol data. The main idea is to externally observe the execution of the kernel during an inference phase and hypothesize invariants on kernel data structures. A rootkit detection phase uses these invariants as specifications of data structure integrity. During this phase, violation of invariants indicates an infection. We have implemented Gibraltar, a prototype tool that infers kernel data structure invariants and uses them to detect rootkits. Experiments show that Gibraltar can effectively detect previously known rootkits, including those that modify noncontrol data structures.",
"title": ""
},
{
"docid": "1a7bdb641bc9b52a1e48e2d6842bf5aa",
"text": "Sales of a brand are determined by measures such as how many customers buy the brand, how often, and how much they also buy other brands. Scanner panel operators routinely report these ‘‘brand performance measures’’ (BPMs) to their clients. In this position paper, we consider how to understand, interpret, and use these measures. The measures are shown to follow well-established patterns. One is that big and small brands differ greatly in how many buyers they have, but usually far less in how loyal these buyers are. The Dirichlet model predicts these patterns. It also provides a broader framework for thinking about all competitive repeat-purchase markets—from soup to gasoline, prescription drugs to aviation fuel, where there are large and small brands, and light and heavy buyers, in contexts as diverse as the United States, United Kingdom, Japan, Germany, and Australasia. Numerous practical uses of the framework are illustrated: auditing the performance of established brands, predicting and evaluating the performance of new brands, checking the nature of unfamiliar markets, of partitioned markets, and of dynamic market situations more generally (where the Dirichlet provides theoretical benchmarks for price promotions, advertising, etc.). In addition, many implications for our understanding of consumers, brands, and the marketing mix logically follow from the Dirichlet framework. In repeat-purchase markets, there is often a lack of segmentation between brands and the typical consumer exhibits polygamous buying behavior (though there might be strong segmentation at the category level). An understanding of these applications and implications leads to consumer insights, imposes constraints on marketing action, and provides norms for evaluating brands and for assessing marketing initiatives. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d0d3ea7c5497070ca2a7e9f904a3c515",
"text": "Fairness in algorithmic decision-making processes is attracting increasing concern. When an algorithm is applied to human-related decisionmaking an estimator solely optimizing its predictive power can learn biases on the existing data, which motivates us the notion of fairness in machine learning. while several different notions are studied in the literature, little studies are done on how these notions affect the individuals. We demonstrate such a comparison between several policies induced by well-known fairness criteria, including the color-blind (CB), the demographic parity (DP), and the equalized odds (EO). We show that the EO is the only criterion among them that removes group-level disparity. Empirical studies on the social welfare and disparity of these policies are conducted.",
"title": ""
}
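The group-level disparities behind the criteria named above can be made concrete with a small sketch: the demographic-parity gap compares positive-prediction rates across groups, while the equalized-odds gap compares true- and false-positive rates. The toy labels, predictions, and group assignments below are invented for illustration.

```python
# Simple group-disparity metrics for binary predictions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    gaps = []
    for label in (0, 1):                 # FPR gap when label == 0, TPR gap when label == 1
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))            # about 0.33
print(equalized_odds_gap(y_true, y_pred, group))        # 1.0 in this toy example
```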
] |
scidocsrr
|
b8cdef2802dd6b6f8b0df66ed2b802cc
|
Analysis in Amazon Reviews Using Probabilistic Machine Learning
|
[
{
"docid": "75fd1706bb96a1888dc9939dbe5359c2",
"text": "In this paper, we present a novel approach to ide ntify feature specific expressions of opinion in product reviews with different features and mixed emotions . The objective is realized by identifying a set of potential features in the review and extract ing opinion expressions about those features by exploiting their associatio ns. Capitalizing on the view that more closely associated words come togeth er to express an opinion about a certain feature, dependency parsing i used to identify relations between the opinion expressions. The syst em learns the set of significant relations to be used by dependency parsing and a threshold parameter which allows us to merge closely associated opinio n expressions. The data requirement is minimal as thi is a one time learning of the domain independent parameters . The associations are represented in the form of a graph which is partiti oned to finally retrieve the opinion expression describing the user specified feature. We show that the system achieves a high accuracy across all domains and performs at par with state-of-the-art systems despi t its data limitations.",
"title": ""
}
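This is not the system described in the passage above, only a small illustration of its underlying idea that words linked in the dependency parse tend to describe the same feature. It assumes spaCy with the en_core_web_sm model installed; the feature name and example sentence are made up.

```python
# Dependency-based extraction of opinion words attached to a given feature noun.
import spacy

nlp = spacy.load("en_core_web_sm")

def opinions_about(review_text, feature):
    doc = nlp(review_text)
    found = []
    for tok in doc:
        if tok.lemma_.lower() != feature.lower():
            continue
        # Adjectival modifiers attached directly to the feature noun.
        found += [child.text for child in tok.children if child.dep_ == "amod"]
        # Predicative adjectives, e.g. "the battery is terrible".
        if tok.dep_ in ("nsubj", "nsubjpass") and tok.head.pos_ == "ADJ":
            found.append(tok.head.text)
    return found

print(opinions_about("The camera is excellent but the noisy battery drains fast.",
                     "battery"))   # expected: ['noisy']
```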
] |
[
{
"docid": "aace50c8446403a9f72b24bce1e88c30",
"text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of ModelDriven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.",
"title": ""
},
{
"docid": "2c082899529d87a00d60fda198046be1",
"text": "David Chaum has introduced the idea of blind signamres, an extension of the concept of digital signatures, as a way to protect the identity and privacy of a user in electronic payment and service networks. Blind signatures also prevent so-called 'dossier creation \" about users by organizations. While the concept of blind signatures still allows authorities to distinguish between valid and false data, it prevents these authoriries from connecting specific data or actions to specific users. With the growing emphasis on the protection of the privacy of user data and user actions in electronic systems, blind signatures seem to be a perfect solution. This paper however, discusses a problemaric aspect of blind signatures, showing that this perfect solution can potentially lead to perfect crime. We use a real crime case as an example.",
"title": ""
},
{
"docid": "81672984e2d94d7a06ffe930136647a3",
"text": "Social network sites provide the opportunity for bu ilding and maintaining online social network groups around a specific interest. Despite the increasing use of social networks in higher education, little previous research has studied their impacts on stud en ’s engagement and on their perceived educational outcomes. This research investigates the impact of instructors’ self-disclosure and use of humor via course-based social networks as well as their credi bility, and the moderating impact of time spent in hese course-based social networks, on the students’ enga g ment in course-based social networks. The researc h provides a theoretical viewpoint, supported by empi rical evidence, on the impact of students’ engageme nt in course-based social networks on their perceived educational outcomes. The findings suggest that instructors who create course-based online social n etworks to communicate with their students can increase their engagement, motivation, and satisfac on. We conclude the paper by suggesting the theoretical implications for the study and by provi ding strategies for instructors to adjust their act ivities in order to succeed in improving their students’ engag ement and educational outcomes.",
"title": ""
},
{
"docid": "9902a306ff4c633f30f6d9e56aa8335c",
"text": "The bank director was pretty upset noticing Joe, the system administrator, spending his spare time playing Mastermind, an old useless game of the 70ies. He had fought the instinct of telling him how to better spend his life, just limiting to look at him in disgust long enough to be certain to be noticed. No wonder when the next day the director fell on his chair astonished while reading, on the newspaper, about a huge digital fraud on the ATMs of his bank, with millions of Euros stolen by a team of hackers all around the world. The article mentioned how the hackers had ‘played with the bank computers just like playing Mastermind’, being able to disclose thousands of user PINs during the one-hour lunch break. That precise moment, a second before falling senseless, he understood the subtle smile on Joe’s face the day before, while training at his preferred game, Mastermind.",
"title": ""
},
{
"docid": "b7177265a8e82e4357fdb8eeb3cbab12",
"text": "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a \"siamese\" deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by a cosine layer. Each sub network includes two convolutional layers and a full connected layer. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Experiments on VIPeR illustrate the superior performance of our method and a cross database experiment also shows its good generalization.",
"title": ""
},
{
"docid": "3e1165f031ac1337e79bd5c4eb1ad790",
"text": "Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI).",
"title": ""
},
{
"docid": "34993e22f91f3d5b31fe0423668a7eb1",
"text": "K-means as a clustering algorithm has been studied in intrusion detection. However, with the deficiency of global search ability it is not satisfactory. Particle swarm optimization (PSO) is one of the evolutionary computation techniques based on swarm intelligence, which has high global search ability. So K-means algorithm based on PSO (PSO-KM) is proposed in this paper. Experiment over network connection records from KDD CUP 1999 data set was implemented to evaluate the proposed method. A Bayesian classifier was trained to select some fields in the data set. The experimental results clearly showed the outstanding performance of the proposed method",
"title": ""
},
{
"docid": "4af5aa24efc82a8e66deb98f224cd033",
"text": "Abstract—In the recent years, the rapid spread of mobile device has create the vast amount of mobile data. However, some shallow-structure models such as support vector machine (SVM) have difficulty dealing with high dimensional data with the development of mobile network. In this paper, we analyze mobile data to predict human trajectories in order to understand human mobility pattern via a deep-structure model called “DeepSpace”. To the best of out knowledge, it is the first time that the deep learning approach is applied to predicting human trajectories. Furthermore, we develop the vanilla convolutional neural network (CNN) to be an online learning system, which can deal with the continuous mobile data stream. In general, “DeepSpace” consists of two different prediction models corresponding to different scales in space (the coarse prediction model and fine prediction models). This two models constitute a hierarchical structure, which enable the whole architecture to be run in parallel. Finally, we test our model based on the data usage detail records (UDRs) from the mobile cellular network in a city of southeastern China, instead of the call detail records (CDRs) which are widely used by others as usual. The experiment results show that “DeepSpace” is promising in human trajectories prediction.",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "6a08787a6f87d79d5ebca20569706c59",
"text": "Recently published methods enable training of bitwise neural networks which allow reduced representation of down to a single bit per weight. We present a method that exploits ensemble decisions based on multiple stochastically sampled network models to increase performance figures of bitwise neural networks in terms of classification accuracy at inference. Our experiments with the CIFAR-10 and GTSRB datasets show that the performance of such network ensembles surpasses the performance of the high-precision base model. With this technique we achieve 5.81% best classification error on CIFAR-10 test set using bitwise networks. Concerning inference on embedded systems we evaluate these bitwise networks using a hardware efficient stochastic rounding procedure. Our work contributes to efficient embedded bitwise neural networks.",
"title": ""
},
{
"docid": "48d5952fa77f40b7b6a9dbb9f2a62b33",
"text": "BACKGROUND\nPhysical activity has long been considered as an important component of a healthy lifestyle. Although many efforts have been made to promote physical activity, there is no effective global intervention for physical activity promotion. Some researchers have suggested that Pokémon GO, a location-based augmented reality game, was associated with a short-term increase in players' physical activity on a global scale, but the details are far from clear.\n\n\nOBJECTIVE\nThe objective of our study was to study the relationship between Pokémon GO use and players' physical activity and how the relationship varies across players with different physical activity levels.\n\n\nMETHODS\nWe conducted a field study in Hong Kong to investigate if Pokémon GO use was associated with physical activity. Pokémon GO players were asked to report their demographics through a survey; data on their Pokémon GO behaviors and daily walking and running distances were collected from their mobile phones. Participants (n=210) were Hong Kong residents, aged 13 to 65 years, who played Pokémon GO using iPhone 5 or 6 series in 5 selected types of built environment. We measured the participants' average daily walking and running distances over a period of 35 days, from 14 days before to 21 days after game installation. Multilevel modeling was used to identify and examine the predictors (including Pokémon GO behaviors, weather, demographics, and built environment) of the relationship between Pokémon GO use and daily walking and running distances.\n\n\nRESULTS\nThe average daily walking and running distances increased by 18.1% (0.96 km, approximately 1200 steps) in the 21 days after the participants installed Pokémon GO compared with the average distances over the 14 days before installation (P<.001). However, this association attenuated over time and was estimated to disappear 24 days after game installation. Multilevel models indicated that Pokémon GO had a stronger and more lasting association among the less physically active players compared with the physically active ones (P<.001). Playing Pokémon GO in green space had a significant positive relationship with daily walking and running distances (P=.03). Moreover, our results showed that whether Pokémon GO was played, the number of days played, weather (total rainfall, bright sunshine, mean air temperature, and mean wind speed), and demographics (age, gender, income, education, and body mass index) were associated with daily walking and running distances.\n\n\nCONCLUSIONS\nPokémon GO was associated with a short-term increase in the players' daily walking and running distances; this association was especially strong among less physically active participants. Pokémon GO can build new links between humans and green space and encourage people to engage in physical activity. Our results show that location-based augmented reality games, such as Pokémon GO, have the potential to be a global public health intervention tool.",
"title": ""
},
{
"docid": "3727ee51255d85a9260e1e92cc5b7ca7",
"text": "Electing a leader is a classical problem in distributed computing system. Synchronization between processes often requires one process acting as a coordinator. If an elected leader node fails, the other nodes of the system need to elect another leader without much wasting of time. The bully algorithm is a classical approach for electing a leader in a synchronous distributed computing system, which is used to determine the process with highest priority number as the coordinator. In this paper, we have discussed the limitations of Bully algorithm and proposed a simple and efficient method for the Bully algorithm which reduces the number of messages during the election. Our analytical simulation shows that, our proposed algorithm is more efficient than the Bully algorithm with fewer messages passing and fewer stages.",
"title": ""
},
{
"docid": "b86ab15486581bbf8056e4f1d30eb4e5",
"text": "Existing peer-to-peer publish-subscribe systems rely on structured-overlays and rendezvous nodes to store and relay group membership information. While conceptually simple, this design incurs the significant cost of creating and maintaining rigid-structures and introduces hotspots in the system at nodes that are neither publishers nor subscribers. In this paper, we introduce Quasar, a rendezvous-less probabilistic publish-subscribe system that caters to the specific needs of social networks. It is designed to handle social networks of many groups; on the order of the number of users in the system. It creates a routing infrastructure based on the proactive dissemination of highly aggregated routing vectors to provide anycast-like directed walks in the overlay. This primitive, when coupled with a novel mechanism for dynamically negating routes, enables scalable and efficient group-multicast that obviates the need for structure and rendezvous nodes. We examine the feasibility of this approach and show in a large-scale simulation that the system is scalable and efficient.",
"title": ""
},
{
"docid": "03c03dcdc15028417e699649291a2317",
"text": "The unique characteristics of origami to realize 3-D shape from 2-D patterns have been fascinating many researchers and engineers. This paper presents a fabrication of origami patterned fabric wheels that can deform and change the radius of the wheels. PVC segments are enclosed in the fabrics to build a tough and foldable structure. A special cable driven mechanism was designed to allow the wheels to deform while rotating. A mobile robot with two origami wheels has been built and tested to show that it can deform its wheels to overcome various obstacles.",
"title": ""
},
{
"docid": "068d87d2f1e24fdbe8896e0ab92c2934",
"text": "This paper presents a primary color optical pixel sensor circuit that utilizes hydrogenated amorphous silicon thin-film transistors (TFTs). To minimize the effect of ambient light on the sensing result of optical sensor circuit, the proposed sensor circuit combines photo TFTs with color filters to sense a primary color optical input signal. A readout circuit, which also uses thin-film transistors, is integrated into the sensor circuit for sampling the stored charges in the pixel sensor circuit. Measurements demonstrate that the signal-to-noise ratio of the proposed sensor circuit is unaffected by ambient light under illumination up to 12 000 lux by white LEDs. Thus, the proposed optical pixel sensor circuit is suitable for receiving primary color optical input signals in large TFT-LCD panels.",
"title": ""
},
{
"docid": "aa50aeb6c1c4b52ff677a313d49fd8df",
"text": "Monocular depth estimation, which plays a key role in understanding 3D scene geometry, is fundamentally an illposed problem. Existing methods based on deep convolutional neural networks (DCNNs) have examined this problem by learning convolutional networks to estimate continuous depth maps from monocular images. However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions. In this paper, we hypothesize that achieving a compromise between spatial and depth resolutions can improve network training. Based on this “compromise principle”, we propose a regression-classification cascaded network (RCCN), which consists of a regression branch predicting a low spatial resolution continuous depth map and a classification branch predicting a high spatial resolution discrete depth map. The two branches form a cascaded structure allowing the main classification branch to benefit from the auxiliary regression branch. By leveraging large-scale raw training datasets and some data augmentation strategies, our network achieves competitive or state-of-the-art results on three challenging benchmarks, including NYU Depth V2 [1], KITTI [2], and Make3D [3].",
"title": ""
},
{
"docid": "2c2e57a330157cf28e4d6d6466132432",
"text": "This paper presents an automatic method to track soccer players in soccer video recorded from a single camera where the occurrence of pan-tilt-zoom can take place. The automatic object tracking is intended to support texture extraction in a free viewpoint video authoring application for soccer video. To ensure that the identity of the tracked object can be correctly obtained, background segmentation is performed and automatically removes commercial billboards whenever it overlaps with the soccer player. Next, object tracking is performed by an attribute matching algorithm for all objects in the temporal domain to find and maintain the correlation of the detected objects. The attribute matching process finds the best match between two objects in different frames according to their pre-determined attributes: position, size, dominant color and motion information. Utilizing these attributes, the experimental results show that the tracking process can handle occlusion problems such as occlusion involving more than three objects and occluded objects with similar color and moving direction, as well as correctly identify objects in the presence of camera movements. key words: free viewpoint, attribute matching, automatic object tracking, soccer video",
"title": ""
},
{
"docid": "e9676faf7e8d03c64fdcf6aa5e09b008",
"text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.",
"title": ""
},
{
"docid": "31f838fb0c7db7e8b58fb1788d5554c8",
"text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.",
"title": ""
},
{
"docid": "1b414326bda21ce7a3f36d208a8d80bb",
"text": "Shapelets are discriminative patterns in time series, that best predict the target variable when their distances to the respective time series are used as features for a classifier. Since the shapelet is simply any time series of some length less than or equal to the length of the shortest time series in our data set, there is an enormous amount of possible shapelets present in the data. Initially, shapelets were found by extracting numerous candidates and evaluating them for their prediction quality. Then, Grabocka et al. [2] proposed a novel approach of learning time series shapelets called LTS. A new mathematical formalization of the task via a classification objective function was proposed and a tailored stochastic gradient learning was applied. It enabled learning near-to-optimal shapelets without the overhead of trying out lots of candidates. The Euclidean distance measure was used as distance metric in the proposed approach. As a limitation, it is not able to learn a single shapelet, that can be representative of different subsequences of time series, which are just warped along time axis. To consider these cases, we propose to use Dynamic Time Warping (DTW) as a distance measure in the framework of LTS. The proposed approach was evaluated on 11 real world data sets from the UCR repository and a synthetic data set created by ourselves. The experimental results show that the proposed approach outperforms the existing methods on these data sets.",
"title": ""
}
] |
scidocsrr
|
87eb3cf6b7877ab42a73739b6e98ef32
|
Anatomical Landmark Detection in Medical Applications Driven by Synthetic Data
|
[
{
"docid": "1f5708382f0c4f70f500253554a8b3cb",
"text": "The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors.",
"title": ""
}
] |
[
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "cbdace4636017f925b89ecf266fde019",
"text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain a active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.",
"title": ""
},
{
"docid": "7f2857c1bd23c7114d58c290f21bf7bd",
"text": "Many contemporary organizations are placing a greater emphasis on their performance management systems as a means of generating higher levels of job performance. We suggest that producing performance increments may be best achieved by orienting the performance management system to promote employee engagement. To this end, we describe a new approach to the performance management process that includes employee engagement and the key drivers of employee engagement at each stage. We present a model of engagement management that incorporates the main ideas of the paper and suggests a new perspective for thinking about how to foster and manage employee engagement to achieve high levels of job",
"title": ""
},
{
"docid": "65dd0e6e143624c644043507cf9465a7",
"text": "Let G \" be a non-directed graph having n vertices, without parallel edges and slings. Let the vertices of Gn be denoted by F 1 ,. . ., Pn. Let v(P j) denote the valency of the point P i and put (0. 1) V(G,) = max v(Pj). 1ninn Let E(G.) denote the number of edges of Gn. Let H d (n, k) denote the set of all graphs Gn for which V (G n) = k and the diameter D (Gn) of which is-d, In the present paper we shall investigate the quantity (0 .2) Thus we want to determine the minimal number N such that there exists a graph having n vertices, N edges and diameter-d and the maximum of the valencies of the vertices of the graph is equal to k. To help the understanding of the problem let us consider the following interpretation. Let be given in a country n airports ; suppose we want to plan a network of direct flights between these airports so that the maximal number of airports to which a given airport can be connected by a direct flight should be equal to k (i .e. the maximum of the capacities of the airports is prescribed), further it should be possible to fly from every airport to any other by changing the plane at most d-1 times ; what is the minimal number of flights by which such a plan can be realized? For instance, if n = 7, k = 3, d= 2 we have F2 (7, 3) = 9 and the extremal graph is shown by Fig. 1. The problem of determining Fd (n, k) has been proposed and discussed recently by two of the authors (see [1]). In § 1 we give a short summary of the results of the paper [1], while in § 2 and 3 we give some new results which go beyond those of [1]. Incidentally we solve a long-standing problem about the maximal number of edges of a graph not containing a cycle of length 4. In § 4 we mention some unsolved problems. Let us mention that our problem can be formulated also in terms of 0-1 matrices as follows : Let M=(a il) be a symmetrical n by n zero-one matrix such 2",
"title": ""
},
{
"docid": "c4b5c4c94faa6e77486a95457cdf502f",
"text": "In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all componentry of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations minimizing the symbol error rate. We propose and verify in simulations a training method that yields robust and flexible transceivers that allow—without reconfiguration—reliable transmission over a large range of link dispersions. The results from end-to-end deep learning are successfully verified for the first time in an experiment. In particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. We find that our results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation with feedforward equalization at the receiver. Our study is the first step toward end-to-end deep learning based optimization of optical fiber communication systems.",
"title": ""
},
{
"docid": "6cc6267ef9386f70b8cd197ae02152ad",
"text": "This paper will cover the conceptual design of a Mars Ascent Vehicle (MAV) and efforts underway to raise the TRL at both the component and system levels. A system down select was executed resulting in a Hybrid Propulsion based Single Stage To Orbit (SSTO) MAV baseline architecture. This paper covers the Point of Departure design, as well as results of hardware developments that will be tested in several upcoming flight opportunities.",
"title": ""
},
{
"docid": "66ce4b486893e17e031a96dca9022ade",
"text": "Product reviews possess critical information regarding customers’ concerns and their experience with the product. Such information is considered essential to firms’ business intelligence which can be utilized for the purpose of conceptual design, personalization, product recommendation, better customer understanding, and finally attract more loyal customers. Previous studies of deriving useful information from customer reviews focused mainly on numerical and categorical data. Textual data have been somewhat ignored although they are deemed valuable. Existing methods of opinion mining in processing customer reviews concentrates on counting positive and negative comments of review writers, which is not enough to cover all important topics and concerns across different review articles. Instead, we propose an automatic summarization approach based on the analysis of review articles’ internal topic structure to assemble customer concerns. Different from the existing summarization approaches centered on sentence ranking and clustering, our approach discovers and extracts salient topics from a set of online reviews and further ranks these topics. The final summary is then generated based on the ranked topics. The experimental study and evaluation show that the proposed approach outperforms the peer approaches, i.e. opinion mining and clustering-summarization, in terms of users’ responsiveness and its ability to discover the most important topics. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dcff0e9e62d245212554f639d5b152bf",
"text": "The pull-based development model, enabled by git and popularised by collaborative coding platforms like BitBucket, Gitorius, and GitHub, is widely used in distributed software teams. While this model lowers the barrier to entry for potential contributors (since anyone can submit pull requests to any repository), it also increases the burden on integrators (i.e., members of a project's core team, responsible for evaluating the proposed changes and integrating them into the main development line), who struggle to keep up with the volume of incoming pull requests. In this paper we report on a quantitative study that tries to resolve which factors affect pull request evaluation latency in GitHub. Using regression modeling on data extracted from a sample of GitHub projects using the Travis-CI continuous integration service, we find that latency is a complex issue, requiring many independent variables to explain adequately.",
"title": ""
},
{
"docid": "3d84f5f8322737bf8c6f440180e07660",
"text": "Incremental Dialog Processing (IDP) enables Spoken Dialog Systems to gradually process minimal units of user speech in order to give the user an early system response. In this paper, we present an application of IDP that shows its effectiveness in a task-oriented dialog system. We have implemented an IDP strategy and deployed it for one month on a real-user system. We compared the resulting dialogs with dialogs produced over the previous month without IDP. Results show that the incremental strategy significantly improved system performance by eliminating long and often off-task utterances that generally produce poor speech recognition results. User behavior is also affected; the user tends to shorten utterances after being interrupted by the system.",
"title": ""
},
{
"docid": "428b6cdbf1d5388482ab34be385004aa",
"text": "Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code’s known sematics. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VARNAMING, in which a network attempts to predict the name of a variable given its usage, and VARMISUSE, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VARMISUSE task in many cases. Additionally, our testing showed that VARMISUSE identifies a number of bugs in mature open-source projects.",
"title": ""
},
{
"docid": "b5e603ef5cae02919f7574d07347db38",
"text": "In this paper, we propose a novel approach for traffic accident anticipation through (i) Adaptive Loss for Early Anticipation (AdaLEA) and (ii) a large-scale self-annotated incident database for anticipation. The proposed AdaLEA allows a model to gradually learn an earlier anticipation as training progresses. The loss function adaptively assigns penalty weights depending on how early the model can anticipate a traffic accident at each epoch. Additionally, we construct a Near-miss Incident DataBase for anticipation. This database contains an enormous number of traffic near-miss incident videos and annotations for detail evaluation of two tasks, risk anticipation and risk-factor anticipation. In our experimental results, we found our proposal achieved the highest scores for risk anticipation (+6.6% better on mean average precision (mAP) and 2.36 sec earlier than previous work on the average time-to-collision (ATTC)) and risk-factor anticipation (+4.3% better on mAP and 0.70 sec earlier than previous work on ATTC).",
"title": ""
},
{
"docid": "cf2581c3b09a1dee4662136872f2b3fa",
"text": "Memristor had been ̄rst theorized nearly 40 years ago by Prof. Chua, as the fourth fundamental circuit element beside the three existing elements (Resistor, Capacitor and Inductor) but because no one has succeeded in building a memristor, it has long remained a theoretical element. Some months ago, Hewlett-Packard (hp) announced it created a memristor using a TiO2=TiO2 X structure. In this paper, the characteristics, structures and relations for the invented hp's memristor are brie°y reviewed and then two general SPICE models for the charge-controlled and °ux-controlled memristors are introduced for the ̄rst time. By adjusting the model parameters to the hp's memristor characteristics some circuit properties of the device are studied and then two important memristor applications as the memory cell in a nonvolatileRAM structure and as the synapse in an arti ̄cial neural network are studied. By utilizing the introduced models and designing the appropriate circuits for two most important applications; a nonvolatile memory structure and a programmable logic gate, circuit simulations are done and the results are presented.",
"title": ""
},
{
"docid": "9d1d02358e32a5c40c35e573f63d5366",
"text": "Ensuring communications security in Wireless Sensor Networks (WSNs) indeed is critical; due to the criticality of the resources in the sensor nodes as well as due to their ubiquitous and pervasive deployment, with varying attributes and degrees of security required. The proliferation of the next generation sensor nodes, has not solved this problem, because of the greater emphasis on low-cost deployment. In addition, the WSNs use data-centric multi-hop communication that in turn, necessitates the security support to be devised at the link layer (increasing the cost of security related operations), instead of being at the application layer, as in general networks. Therefore, an energy-efficient link layer security framework is necessitated. There do exists a number of link layer security architectures that offer some combinations of the security attributes desired by different WSN applications. However, as we show in this paper, none of them is responsive to the actual security demands of the applications. Therefore, we believe that there is a need for investigating the feasibility of a configurable software-based link layer security architecture wherein an application can be compiled flexibly, with respect to its actual security demands. In this paper, we analyze, propose and experiment with the basic design of such configurable link layer security architecture for WSNs. We also experimentally evaluate various aspects related to our scheme viz. configurable block ciphers, configurable block cipher modes of operations, configurable MAC sizes and configurable replay protection. The architecture proposed is aimed to offer the optimal level of security at the minimal overhead, thus saving the precious resources in the WSNs.",
"title": ""
},
{
"docid": "00f4af13461c5f6d15d6883afc50c1d1",
"text": "In order to solve the problem that the long cycle and the repetitive work in the process of designing the industrial robot, a modular manipulator system developed for general industrial applications is introduced in this paper. When the application scene is changed, the corresponding robotic modules can be selected to assemble a new robot configuration that meets the requirements. The modules can be divided into two categories: joint modules and link modules. Joint modules consist of three types of revolute joint modules with different torque, and link modules mainly contain T link module and L link module. By connection of different types of modules, various of configurations can be achieved. Considering the traditional 6-DoF manipulators are difficult to meet the needs of the unstructured industrial applications, a 7-DoF redundant manipulator prototype is designed on the basis of the robotic modules.",
"title": ""
},
{
"docid": "7f1f2de5efadcd46d423257e9c21f3bb",
"text": "Physical layer security is an emerging technique to improve the wireless communication security, which is wide ly regarded as a complement to cryptographic technologies. To design physical layer security techniques under practical scenarios, the uncertainty and imperfections in the channel knowl edge need to be taken into consideration. This paper provides a survey of recent research and development in physical layer security considering the imperfect channel state informat ion (CSI) at communication nodes. We first present an overview of the main information-theoretic measures of the secrecy p erformance with imperfect CSI. Then, we describe several sign al processing enhancements in secure transmission designs, s uch as secure on-off transmission, beamforming with artificialnoise, and secure communication assisted by relay nodes or in cogni tive radio systems. The recent studies of physical layer securit y in large-scale decentralized wireless networks are also summ arized. Finally, the open problems for the on-going and future resea rch are discussed.",
"title": ""
},
{
"docid": "458470e18ce2ab134841f76440cfdc2b",
"text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",
"title": ""
},
{
"docid": "b9ea38ab2c6c68af37a46d92c8501b68",
"text": "In this paper we introduce a gamification model for encouraging sustainable multi-modal urban travel in modern European cities. Our aim is to provide a mechanism that encourages users to reflect on their current travel behaviours and to engage in more environmentally friendly activities that lead to the formation of sustainable, long-term travel behaviours. To achieve this our users track their own behaviours, set goals, manage their progress towards those goals, and respond to challenges. Our approach uses a point accumulation and level achievement metaphor to abstract from the underlying specifics of individual behaviours and goals to allow an Simon Wells University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: simon.wells@abdn.ac.uk Henri Kotkanen University of Helsinki, Department of Computer Science, P.O. 68 (Gustaf Hllstrmin katu 2b), FI00014 UNIVERSITY OF HELSINKI, FINLAND e-mail: henri.kotkanen@helsinki.fi Michael Schlafli University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: michael.schlafli@abdn.ac.uk Silvia Gabrielli CREATE-NET Via alla Cascata 56/D Povo 38123 Trento Italy e-mail: silvia.gabrielli@createnet.org Judith Masthoff University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: j.masthoff@abdn.ac.uk Antti Jylhä University of Helsinki, Department of Computer Science, P.O. 68 (Gustaf Hllstrmin katu 2b), FI00014 UNIVERSITY OF HELSINKI, FINLAND e-mail: antti.jylha@cs.helsinki.fi Paula Forbes University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: paula.forbes@abdn.ac.uk",
"title": ""
},
{
"docid": "3379acb763f587851e2218fca8084117",
"text": "Qualitative research includes a variety of methodological approacheswith different disciplinary origins and tools. This article discusses three commonly used approaches: grounded theory, mixed methods, and action research. It provides background for those who will encounter these methodologies in their reading rather than instructions for carrying out such research. We describe the appropriate uses, key characteristics, and features of rigour of each approach.",
"title": ""
},
{
"docid": "7564ec31bb4e81cc6f8bd9b2b262f5ca",
"text": "Traditional methods to calculate CRC suffer from diminishing returns. Doubling the data width doesn't double the maximum data throughput, the worst case timing path becomes slower. Feedback in the traditional implementation makes pipelining problematic. However, the on chip data width used for high throughput protocols is constantly increasing. The battle of reducing static power consumption is one factor driving this trend towards wider data paths. This paper discusses a method for pipelining the calculation of CRC's, such as ISO-3309 CRC32. This method allows independent scaling of circuit frequency and data throughput by varying the data width and the number of pipeline stages. Pipeline latency can be traded for area while slightly affecting timing. Additionally it allows calculation over data that isn't the full width of the input. This often happens at the end of the packet, although it could happen in the middle of the packet if data arrival is bursty. Finally, a fortunate side effect is that it offers the ability to efficiently update a known good CRC value where a small subset of data in the packet has changed. This is a function often desired in routers, for example updating the TTL field in IPv4 packets.",
"title": ""
},
{
"docid": "dd5fa68b788cc0816c4e16f763711560",
"text": "Over the last ten years the basic knowledge of brain structure and function has vastly expanded, and its incorporation into the developmental sciences is now allowing for more complex and heuristic models of human infancy. In a continuation of this effort, in this two-part work I integrate current interdisciplinary data from attachment studies on dyadic affective communications, neuroscience on the early developing right brain, psychophysiology on stress systems, and psychiatry on psychopathogenesis to provide a deeper understanding of the psychoneurobiological mechanisms that underlie infant mental health. In this article I detail the neurobiology of a secure attachment, an exemplar of adaptive infant mental health, and focus upon the primary caregiver’s psychobiological regulation of the infant’s maturing limbic system, the brain areas specialized for adapting to a rapidly changing environment. The infant’s early developing right hemisphere has deep connections into the limbic and autonomic nervous systems and is dominant for the human stress response, and in this manner the attachment relationship facilitates the expansion of the child’s coping capcities. This model suggests that adaptive infant mental health can be fundamentally defined as the earliest expression of flexible strategies for coping with the novelty and stress that is inherent in human interactions. This efficient right brain function is a resilience factor for optimal development over the later stages of the life cycle. RESUMEN: En los últimos diez an ̃os el conocimiento ba ́sico de la estructura y funcio ́n del cerebro se ha expandido considerablemente, y su incorporacio ́n mo parte de las ciencias del desarrollo permite ahora tener modelos de infancia humana ma ́s complejos y heurı ́sticos. Como una continuacio ́n a este esfuerzo, en este ensayo que contiene dos partes, se integra la actual informacio ́n interdisciplinaria que proviene de los estudios de la unio ́n afectiva en relacio ́n con comunicaciones afectivas en forma de dı ́adas, la neurociencia en el desarrollo inicial del lado derecho del cerebro, la sicofisiologı ́a de los sistemas de tensión emocional, ası ́ como la siquiatrı ́a en cuanto a la sicopatoge ́nesis, con el fin de presentar un conocimiento ma ́s profundo de los mecanismos siconeurobiolo ́gic s que sirven de base para la salud mental infantil. En este ensayo se explica con detalle la neurobiologı ́a de una relacio ́n afectiva segura, un modelo de salud mental infantil que se puede adaptar, y el enfoque del mismo se centra en la reglamentacio ́n sicobiológica que quien primariamente cuida del nin ̃o tiene del maduramiento del sistema lı́mbico del infante, o sea, las a ́reas del cerebro especialmente dedicadas a la adaptacio ́n un medio Direct correspondence to: Allan N. Schore, Department of Psychiatry and Biobehavioral Sciences, UCLA School of Medicine, 9817 Sylvia Avenue, Northridge, CA 91324; fax: (818) 349-4404; e-mail: anschore@aol.com. 8 ● A.N. Schore IMHJ (Wiley) LEFT INTERACTIVE",
"title": ""
}
] |
scidocsrr
|
dfd924210ac1736adbdda32fe4c90194
|
On the Convergence of Learning-based Iterative Methods for Nonconvex Inverse Problems
|
[
{
"docid": "e2a9bb49fd88071631986874ea197bc1",
"text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"title": ""
},
{
"docid": "d2b7e61ecedf80f613d25c4f509ddaf6",
"text": "We present a new image editing method, particularly effective for sharpening major edges by increasing the steepness of transition while eliminating a manageable degree of low-amplitude structures. The seemingly contradictive effect is achieved in an optimization framework making use of L0 gradient minimization, which can globally control how many non-zero gradients are resulted in to approximate prominent structure in a sparsity-control manner. Unlike other edge-preserving smoothing approaches, our method does not depend on local features, but instead globally locates important edges. It, as a fundamental tool, finds many applications and is particularly beneficial to edge extraction, clip-art JPEG artifact removal, and non-photorealistic effect generation.",
"title": ""
}
] |
[
{
"docid": "1f711861fbe29d695ace76270b3eb862",
"text": "The aim of this study is to collate all available data on antidiabetic plants that inhibit alpha glucosidase, reported mainly by Medline (PubMed) these last years. In the present study, interest is focused on experimental researches conducted on hypoglycemic plants particularly those which show alpha glucosidase inhibitor activity alongside bioactive components. This study describes 47 species that belong to 29 families. The plant families, which enclose the species, studied most as inhibitors of alphaglucosidase, are Fabaceae (6 species.), Crassulaceae (3 species), Hippocrateacaea (3 species), Lamiaceae (3 species), and Myrtaceae (3 species), with most studied species being Salacia reticulata (Hippocrateaceae) and Morus alba (Moraceae). The study also covers natural products (active natural components and crude extracts) isolated from the medicinal plants which inhibit alpha glucosidase as reported this last decade. Many kinds of these isolated natural products show strong activity such as, Alkaloids, stilbenoids (polyphenol), triterpene, acids (chlorogenic acid, betulinic acid, syringic acid, vanillic acid, bartogenic acid, oleanolic acid, dehydrotrametenolic acid, corosolic acid, ellagic acid, ursolic acid, gallic acid), phytosterol, myoinositol, flavonoids, Flavonolignans, anthraquinones, anthrones, and xanthones, Feruloylglucosides, flavanone glucosides, acetophenone glucosides, glucopyranoside derivatives, genine derivatives, flavonol, anthocyanin and others.",
"title": ""
},
{
"docid": "fe397e4124ef517268aaabd999bc02c4",
"text": "A new frequency-reconfigurable quasi-Yagi dipole antenna is presented. It consists of a driven dipole element with two varactors in two arms, a director with an additional varactor, a truncated ground plane reflector, a microstrip-to-coplanar-stripline (CPS) transition, and a novel biasing circuit. The effective electrical length of the director element and that of the driven arms are adjusted together by changing the biasing voltages. A 35% continuously frequency-tuning bandwidth, from 1.80 to 2.45 GHz, is achieved. This covers a number of wireless communication systems, including 3G UMTS, US WCS, and WLAN. The length-adjustable director allows the endfire pattern with relatively high gain to be maintained over the entire tuning bandwidth. Measured results show that the gain varies from 5.6 to 7.6 dBi and the front-to-back ratio is better than 10 dB. The H-plane cross polarization is below -15 dB, and that in the E-plane is below -20 dB.",
"title": ""
},
{
"docid": "4a9b4668296561b3522c3c57c64220c1",
"text": "Hyperspectral imagery, which contains hundreds of spectral bands, has the potential to better describe the biological and chemical attributes on the plants than multispectral imagery and has been evaluated in this paper for the purpose of crop yield estimation. The spectrum of each pixel in a hyperspectral image is considered as a linear combinations of the spectra of the vegetation and the bare soil. Recently developed linear unmixing approaches are evaluated in this paper, which automatically extracts the spectra of the vegetation and bare soil from the images. The vegetation abundances are then computed based on the extracted spectra. In order to reduce the influences of this uncertainty and obtain a robust estimation results, the vegetation abundances extracted on two different dates on the same fields are then combined. The experiments are carried on the multidate hyperspectral images taken from two grain sorghum fields. The results show that the correlation coefficients between the vegetation abundances obtained by unsupervised linear unmixing approaches are as good as the results obtained by supervised methods, where the spectra of the vegetation and bare soil are measured in the laboratory. In addition, the combination of vegetation abundances extracted on different dates can improve the correlations (from 0.6 to 0.7).",
"title": ""
},
{
"docid": "bb6ec993e0d573f4307a37588d6732ae",
"text": "Beaudry and Pinsonneault (2005) IT related coping behaviors System users choose different adaptation strategies based on a combination of primary appraisal (i.e., a user’s assessment of the expected consequences of an IT event) and secondary appraisal (i.e., a user’s assessment of his/her control over the situation). Users will perform different actions in response to a combination of cognitive and behavioral efforts, both of which have been categorized as either problemor emotion-focused. Whole system",
"title": ""
},
{
"docid": "3ab1e2768c1f612f1f85ddb192b37e1f",
"text": "The vertical Cup-to-Disc Ratio (CDR) is an important indicator in the diagnosis of glaucoma. Automatic segmentation of the optic disc (OD) and optic cup is crucial towards a good computer-aided diagnosis (CAD) system. This paper presents a statistical model-based method for the segmentation of the optic disc and optic cup from digital color fundus images. The method combines knowledge-based Circular Hough Transform and a novel optimal channel selection for segmentation of the OD. Moreover, we extended the method to optic cup segmentation, which is a more challenging task. The system was tested on a dataset of 325 images. The average Dice coefficient for the disc and cup segmentation is 0.92 and 0.81 respectively, which improves significantly over existing methods. The proposed method has a mean absolute CDR error of 0.10, which outperforms existing methods. The results are promising and thus demonstrate a good potential for this method to be used in a mass screening CAD system.",
"title": ""
},
{
"docid": "cec46b9f4c340ee8ddb12f5a6e2e7a16",
"text": "The main theorem allows an elegant algorithm to be refined into an efficient one. The elegant algorithm for constructing a finite automaton from a regular expression is based on 'derivatives of' regular expressions; the efficient algorithm is based on 'marking of' regular expressions. Derivatives of regular expressions correspond to state transitions in finite automata. When a finite automaton makes a transition under input symbol a, a leading a is stripped from the remaining input. Correspondingly, if the input string is generated by a regular expression E, then the derivative of E by a generates the remaining input after a leading a is stripped. Brzozowski (1964) used derivatives to construct finite automata; the state for expression E has a transition under a to the state for the derivative of E by a. This approach extends to regular expressions with new operators, including intersection and complement; however, explicit computation of derivatives can be expensive. Marking of regular'expressions yields an expression with distinct input symbols. Following MeNaughton and Yamada (1960), we attach subscripts to each input symbol in an expression; (ab+b)*ba becomes (atb2+b3)*b4as. Conceptually, the efficient algorithm constructs an automaton for the marked expression. The marks on the transitions are then erased, resulting in a nondeterministic automaton for the original unmarked expression. This approach works for the usual operations of union, concatenation, and iteration; however, intersection and complement cannot be handled because marking and unmarking do not preserve the languages generated by regular expressions with these operators.",
"title": ""
},
{
"docid": "1f677c07ba42617ac590e6e0a5cdfeab",
"text": "Network Functions Virtualization (NFV) is an emerging initiative to overcome increasing operational and capital costs faced by network operators due to the need to physically locate network functions in specific hardware appliances. In NFV, standard IT virtualization evolves to consolidate network functions onto high volume servers, switches and storage that can be located anywhere in the network. Services are built by chaining a set of Virtual Network Functions (VNFs) deployed on commodity hardware. The implementation of NFV leads to the challenge: How several network services (VNF chains) are optimally orchestrated and allocated on the substrate network infrastructure? In this paper, we address this problem and propose CoordVNF, a heuristic method to coordinate the composition of VNF chains and their embedding into the substrate network. CoordVNF aims to minimize bandwidth utilization while computing results within reasonable runtime.",
"title": ""
},
{
"docid": "59791087d518577c20708e544a5eec26",
"text": "This paper proposes an innovative fraud detection method, built upon existing fraud detection research and Minority Report, to deal with the data mining problem of skewed data distributions. This method uses backpropagation (BP), together with naive Bayesian (NB) and C4.5 algorithms, on data partitions derived from minority oversampling with replacement. Its originality lies in the use of a single meta-classifier (stacking) to choose the best base classifiers, and then combine these base classifiers' predictions (bagging) to improve cost savings (stacking-bagging). Results from a publicly available automobile insurance fraud detection data set demonstrate that stacking-bagging performs slightly better than the best performing bagged algorithm, C4.5, and its best classifier, C4.5 (2), in terms of cost savings. Stacking-bagging also outperforms the common technique used in industry (BP without both sampling and partitioning). Subsequently, this paper compares the new fraud detection method (meta-learning approach) against C4.5 trained using undersampling, oversampling, and SMOTEing without partitioning (sampling approach). Results show that, given a fixed decision threshold and cost matrix, the partitioning and multiple algorithms approach achieves marginally higher cost savings than varying the entire training data set with different class distributions. The most interesting find is confirming that the combination of classifiers to produce the best cost savings has its contributions from all three algorithms.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "6ef3ccec0e07a35a10d9177dda3b0ad0",
"text": "This chapter presents an introduction to combinatorial optimisation in the context of the high-level modelling platform, Numberjack. The process of developing an effective model for a combinatorial problem is presented, along with details on how such problems can be solved using three of the most prominent solution paradigms.",
"title": ""
},
{
"docid": "351faf9d58bd2a2010766acff44dadbc",
"text": "صلاخلا ـ ة : ىلع قوفي ةيبرعلا ةغللاب نيثدحتملا ددع نأ نم مغرلا يتئام تنإ يف ةلوذبملا دوهجلا نأ لاإ ،صخش نويلم ةليلق ةيبوساحلا ةيبرعلا ةيوغللا رداصملا جا ادج ب ةيبوساحلا ةيبرعلا مجاعملا لاجم يف ةصاخ . بلغأ نإ ةيفاآ تسيل يهف اذلو ،ةيبنجأ تاغلل امنإ ،ةيبرعلا ةغلل لصلأا يف ممصت مل ةدوجوملا دوهجلا يبرعلا عمتجملا تاجايتحا دسل . فدهي حرتقم ضرع ىلإ ثحبلا اذه لأ جذومن ساح مجعم ةينقت ىلع ينبم يبو \" يجولوتنلأا \" اهيلع دمتعت يتلا ةيساسلأا تاينقتلا نم ةثيدح ةينقت يهو ، ةينقت \" ةيللادلا بيولا \" ام لاجم يف تاقلاعلاو ميهافملل يللادلا يفرعملا ليثمتلاب ىنعت ، . دقو ءانب مت لأا جذومن ةيرظن ساسأ ىلع \" ةيللادلا لوقحلا \" تايوغللا لاجم يف ةفورعملا ، و ت م اهساسأ ىلع ينب يتلا تانايبلا ءاقتسا لأا جذومن نم \" نامزلا ظافلأ \" يف \" ميركلا نآرقلا \" ، يذلا اهلامآو اهيقر يف ةيبرعلا هيلإ تلصو ام قدأ دعي . اذه لثم رفوت نإ لأا جذومن اعفان نوكيس ةيبرعلا ةغلل ةيبرعلا ةغللا لاجم يف ةيبوساحلا تاقيبطتلل . مت دقو م ضرع ثحبلا اذه يف ءانب ةيجهنمل لصف لأا جذومن اهيلإ لصوتلا مت يتلا جئاتنلاو .",
"title": ""
},
{
"docid": "3435041805c5cb2629d70ff909c10637",
"text": "Synchronized stochastic gradient descent (SGD) optimizers with data parallelism are widely used in training large-scale deep neural networks. Although using larger mini-batch sizes can improve the system scalability by reducing the communication-to-computation ratio, it may hurt the generalization ability of the models. To this end, we build a highly scalable deep learning training system for dense GPU clusters with three main contributions: (1) We propose a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy. (2) We propose an optimization approach for extremely large minibatch size (up to 64k) that can train CNN models on the ImageNet dataset without losing accuracy. (3) We propose highly optimized all-reduce algorithms that achieve up to 3x and 11x speedup on AlexNet and ResNet-50 respectively than NCCL-based training on a cluster with 1024 Tesla P40 GPUs. On training ResNet-50 with 90 epochs, the state-of-the-art GPU-based system with 1024 Tesla P100 GPUs spent 15 minutes and achieved 74.9% top-1 test accuracy, and another KNL-based system with 2048 Intel KNLs spent 20 minutes and achieved 75.4% accuracy. Our training system can achieve 75.8% top-1 test accuracy in only 6.6 minutes using 2048 Tesla P40 GPUs. When training AlexNet with 95 epochs, our system can achieve 58.7% top-1 test accuracy within 4 minutes, which also outperforms all other existing systems.",
"title": ""
},
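The mixed-precision recipe summarized in the abstract above (low-precision arithmetic with a full-precision master copy of the weights plus loss scaling) can be sketched in a few lines. This is a minimal toy illustration under assumed shapes, a fixed loss scale and a toy squared loss, not the authors' training system:

```python
import numpy as np

# Minimal sketch of mixed-precision SGD with loss scaling (assumed values).
# An FP32 "master" copy of the weights is updated from FP16 gradients so that
# small gradient values do not underflow in half precision.
rng = np.random.default_rng(0)
w_master = rng.normal(size=(4,)).astype(np.float32)   # FP32 master weights
loss_scale = 1024.0                                    # assumed static loss scale
lr = 0.01

def grad_fp16(w, x, y):
    # Gradient of a toy squared loss, computed in FP16 after scaling the loss.
    w16, x16, y16 = w.astype(np.float16), x.astype(np.float16), y.astype(np.float16)
    err = w16 @ x16 - y16
    return (loss_scale * 2.0 * err * x16).astype(np.float16)

for step in range(100):
    x = rng.normal(size=(4,)).astype(np.float32)
    y = np.float32(x.sum())                            # toy regression target
    g16 = grad_fp16(w_master, x, y)
    g32 = g16.astype(np.float32) / loss_scale          # unscale in FP32
    w_master -= lr * g32                               # FP32 weight update
```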
{
"docid": "faed829d4fc252159a0ed5e7ff1eea07",
"text": "Modern cryptographic practice rests on the use of one-way functions, which are easy to evaluate but difficult to invert. Unfortunately, commonly used one-way functions are either based on unproven conjectures or have known vulnerabilities. We show that instead of relying on number theory, the mesoscopic physics of coherent transport through a disordered medium can be used to allocate and authenticate unique identifiers by physically reducing the medium's microstructure to a fixed-length string of binary digits. These physical one-way functions are inexpensive to fabricate, prohibitively difficult to duplicate, admit no compact mathematical representation, and are intrinsically tamper-resistant. We provide an authentication protocol based on the enormous address space that is a principal characteristic of physical one-way functions.",
"title": ""
},
{
"docid": "86429b47cefce29547ee5440a8410b83",
"text": "AIM\nThe purpose of the study was to observe the outcome of trans-fistula anorectoplasty (TFARP) in treating female neonates with anorectovestibular fistula (ARVF).\n\n\nMETHODS\nA prospective study was carried out on female neonates with vestibular fistula, admitted into the surgical department of a tertiary level children hospital during the period from January 2009 to June 2011. TFARP without a covering colostomy was performed for definitive correction in the neonatal period in all. Data regarding demographics, clinical presentation, associated anomalies, preoperative findings, preoperative preparations, operative technique, difficulties faced during surgery, duration of surgery, postoperative course including complications, hospital stay, bowel habits and continence was prospectively compiled and analyzed. Anorectal function was measured by the modified Wingspread scoring as, \"excellent\", \"good\", \"fair\" and \"poor\".\n\n\nRESULTS\nThirty-nine neonates with vestibular fistula underwent single stage TFARP. Mean operation time was 81 minutes and mean hospital stay was 6 days. Three (7.7%) patients suffered vaginal tear during separation from the rectal wall. Two patients (5.1%) developed wound infection at neoanal site that resulted in anal stenosis. Eight (20.51%) children in the series are more than 3 years of age and are continent; all have attained \"excellent\" fecal continence score. None had constipation or soiling. Other 31 (79.5%) children less than 3 years of age have satisfactory anocutaneous reflex and anal grip on per rectal digital examination, though occasional soiling was observed in 4 patients.\n\n\nCONCLUSION\nPrimary repair of ARVF in female neonates by TFARP without dividing the perineum is a feasible procedure with good cosmetic appearance and good anal continence. Separation of the rectum from the posterior wall of vagina is the most delicate step of the operation, takes place under direct vision. It is very important to keep the perineal body intact. With meticulous preoperative bowel preparation and post operative wound care and bowel management, single stage reconstruction is possible in neonates with satisfactory results.",
"title": ""
},
{
"docid": "45a45087a6829486d46eda0adcff978f",
"text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.",
"title": ""
},
{
"docid": "6090d8c6e8ef8532c5566908baa9a687",
"text": "Cardiovascular diseases (CVD) are known to be the most widespread causes to death. Therefore, detecting earlier signs of cardiac anomalies is of prominent importance to ease the treatment of any cardiac complication or take appropriate actions. Electrocardiogram (ECG) is used by doctors as an important diagnosis tool and in most cases, it's recorded and analyzed at hospital after the appearance of first symptoms or recorded by patients using a device named holter ECG and analyzed afterward by doctors. In fact, there is a lack of systems able to capture ECG and analyze it remotely before the onset of severe symptoms. With the development of wearable sensor devices having wireless transmission capabilities, there is a need to develop real time systems able to accurately analyze ECG and detect cardiac abnormalities. In this paper, we propose a new CVD detection system using Wireless Body Area Networks (WBAN) technology. This system processes the captured ECG using filtering and Undecimated Wavelet Transform (UWT) techniques to remove noises and extract nine main ECG diagnosis parameters, then the system uses a Bayesian Network Classifier model to classify ECG based on its parameters into four different classes: Normal, Premature Atrial Contraction (PAC), Premature Ventricular Contraction (PVC) and Myocardial Infarction (MI). The experimental results on ECGs from real patients databases show that the average detection rate (TPR) is 96.1% for an average false alarm rate (FPR) of 1.3%.",
"title": ""
},
{
"docid": "d01fe3897f0f09fc023d943ece518e6e",
"text": "In this paper, we propose an efficient lane detection algorithm for lane departure detection; this algorithm is suitable for low computing power systems like automobile black boxes. First, we extract candidate points, which are support points, to extract a hypotheses as two lines. In this step, Haar-like features are used, and this enables us to use an integral image to remove computational redundancy. Second, our algorithm verifies the hypothesis using defined rules. These rules are based on the assumption that the camera is installed at the center of the vehicle. Finally, if a lane is detected, then a lane departure detection step is performed. As a result, our algorithm has achieved 90.16% detection rate; the processing time is approximately 0.12 milliseconds per frame without any parallel computing.",
"title": ""
},
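The integral-image trick mentioned in the lane detection abstract above, which removes redundancy when evaluating Haar-like features, can be sketched as follows; the image data and rectangle coordinates are arbitrary assumptions:

```python
import numpy as np

# Sketch of the integral-image trick behind Haar-like features: any rectangle
# sum costs four lookups regardless of its size (random image for illustration).
img = np.random.default_rng(0).integers(0, 256, size=(120, 160)).astype(np.int64)
ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))   # zero-padded integral image

def rect_sum(r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] using four integral-image lookups.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# A simple two-rectangle Haar-like response (left half minus right half).
resp = rect_sum(40, 20, 60, 40) - rect_sum(40, 40, 60, 60)
print(resp == img[40:60, 20:40].sum() - img[40:60, 40:60].sum())   # True
```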
{
"docid": "787d265715ac892e1c969cb098497808",
"text": "Presence in virtual environments Presence is usually defined as the subjective sense of being and acting in a virtual environment (Slater, Usoh, & Steed, 1994). It has been shown that presence depends on certain features of the virtual environment, for instance which interactions are possible, and which interaction techniques are used (Regenbrecht & Schubert, 2002). A common assumption in research on the sense of presence is that the better the technical immersion of the user in terms of sensory fidelity, real time, field of view and picture quality, the higher the experienced presence. In short, the experienced sense of presence is seen as a function of the quality of immersion. Comparisons of different VR systems have confirmed this hypothesis (e.g., Witmer & Singer, 1998).",
"title": ""
},
{
"docid": "f13d3c01729d9f3dcb2b220a0fcce902",
"text": "User generated content on Twitter (produced at an enormous rate of 340 million tweets per day) provides a rich source for gleaning people's emotions, which is necessary for deeper understanding of people's behaviors and actions. Extant studies on emotion identification lack comprehensive coverage of \"emotional situations\" because they use relatively small training datasets. To overcome this bottleneck, we have automatically created a large emotion-labeled dataset (of about 2.5 million tweets) by harnessing emotion-related hash tags available in the tweets. We have applied two different machine learning algorithms for emotion identification, to study the effectiveness of various feature combinations as well as the effect of the size of the training data on the emotion identification task. Our experiments demonstrate that a combination of unigrams, big rams, sentiment/emotion-bearing words, and parts-of-speech information is most effective for gleaning emotions. The highest accuracy (65.57%) is achieved with a training data containing about 2 million tweets.",
"title": ""
},
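The feature combination described above (unigrams plus bigrams feeding a standard classifier over hashtag-labelled tweets) can be sketched with scikit-learn; the tweets, labels and classifier choice below are illustrative assumptions, not the study's actual data or models:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy hashtag-labelled tweets (hypothetical examples standing in for the
# ~2.5M automatically labelled tweets described above).
tweets = ["so excited for the weekend", "this traffic makes me furious",
          "missing my old friends today", "what a wonderful surprise"]
labels = ["joy", "anger", "sadness", "joy"]

# Unigram + bigram features feeding a linear classifier.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)
print(model.predict(["feeling furious about the delay"]))
```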
{
"docid": "bc30f1eb3c002e2cbae2c36cfbaa8550",
"text": "We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB). Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference. To accomplish this, eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network. All intermediate results from a layer are stored as binary values, as opposed to floating-points used in current BNN implementations, leading to a 32x reduction in required temporary space. We provide empirical evidence that our proposed eBNN approach allows efficient inference (10s of ms) on devices with severely limited memory (10s of KB). For example, eBNN achieves 95% accuracy on the MNIST dataset running on an Intel Curie with only 15 KB of usable memory with an inference runtime of under 50 ms per sample. To ease the development of applications in embedded contexts, we make our source code available that allows users to train and discover eBNN models for a learning task at hand, which fit within the memory constraint of the target device.",
"title": ""
}
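A minimal sketch of the binarization idea described above, with {-1, +1} weights and re-binarized intermediate results between layers, is given below; the layer sizes and random weights are assumptions, and real eBNN implementations would pack bits and use XNOR/popcount kernels rather than integer matrix products:

```python
import numpy as np

# Sketch of a binarized feedforward pass: weights and activations are
# constrained to {-1, +1}, and intermediate results are re-binarized so only
# small binary/integer temporaries are needed between layers (assumed shapes).
rng = np.random.default_rng(0)

def binarize(x):
    return np.where(x >= 0, 1, -1).astype(np.int8)

W1 = binarize(rng.normal(size=(64, 784)))   # layer 1 weights, {-1, +1}
W2 = binarize(rng.normal(size=(10, 64)))    # layer 2 weights, {-1, +1}

def binarized_forward(x):
    a0 = binarize(x)                         # binarized input
    a1 = binarize(W1.astype(np.int32) @ a0)  # binary intermediate result
    return W2.astype(np.int32) @ a1          # integer scores for 10 classes

scores = binarized_forward(rng.normal(size=784))
print(int(np.argmax(scores)))
```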
] |
scidocsrr
|
5d1d3fb168e739c7167a78df984aa154
|
SALAD: Achieving Symmetric Access Latency with Asymmetric DRAM Architecture
|
[
{
"docid": "356361bf2ca0e821250e4a32d299d498",
"text": "DRAM has been a de facto standard for main memory, and advances in process technology have led to a rapid increase in its capacity and bandwidth. In contrast, its random access latency has remained relatively stagnant, as it is still around 100 CPU clock cycles. Modern computer systems rely on caches or other latency tolerance techniques to lower the average access latency. However, not all applications have ample parallelism or locality that would help hide or reduce the latency. Moreover, applications' demands for memory space continue to grow, while the capacity gap between last-level caches and main memory is unlikely to shrink. Consequently, reducing the main-memory latency is important for application performance. Unfortunately, previous proposals have not adequately addressed this problem, as they have focused only on improving the bandwidth and capacity or reduced the latency at the cost of significant area overhead.\n We propose asymmetric DRAM bank organizations to reduce the average main-memory access latency. We first analyze the access and cycle times of a modern DRAM device to identify key delay components for latency reduction. Then we reorganize a subset of DRAM banks to reduce their access and cycle times by half with low area overhead. By synergistically combining these reorganized DRAM banks with support for non-uniform bank accesses, we introduce a novel DRAM bank organization with center high-aspect-ratio mats called CHARM. Experiments on a simulated chip-multiprocessor system show that CHARM improves both the instructions per cycle and system-wide energy-delay product up to 21% and 32%, respectively, with only a 3% increase in die area.",
"title": ""
}
] |
[
{
"docid": "ae6a3b7943c0611538192c49ae3e57c9",
"text": "Mindfulness, a concept originally derived from Buddhist psychology, is essential for some well-known clinical interventions. Therefore an instrument for measuring mindfulness is useful. We report here on two studies constructing and validating the Freiburg Mindfulness Inventory (FMI) including a short form. A preliminary questionnaire was constructed through expert interviews and extensive literature analysis and tested in 115 subjects attending mindfulness meditation retreats. This psychometrically sound 30-item scale with an internal consistency of Cronbach alpha = .93 was able to significantly demonstrate the increase in mindfulness after the retreat and to discriminate between experienced and novice meditators. In a second study we broadened the scope of the concept to 86 subjects without meditation experience, 117 subjects with clinical problems, and 54 participants from retreats. Reducing the scale to a short form with 14 items resulted in a semantically robust and psychometrically stable (alpha = .86) form. Correlation 0191-8869/$ see front matter 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.11.025 * Corresponding author. Address: University of Northampton, School of Social Sciences, Division of Psychology and Samueli Institute—European Office, Boughton Green Road, Northampton NN2 7AL, UK. E-mail address: harald.walach@northampton.ac.uk (H. Walach). www.elsevier.com/locate/paid Personality and Individual Differences 40 (2006) 1543–1555 with other relevant constructs (self-awareness, dissociation, global severity index, meditation experience in years) was significant in the medium to low range of correlations and lends construct validity to the scale. Principal Component Analysis suggests one common factor. This short scale is sensitive to change and can be used also with subjects without previous meditation experience. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cb5d0498db49c8421fef279aea69c367",
"text": "The growing commoditization of the underground economy has given rise to malware delivery networks, which charge fees for quickly delivering malware or unwanted software to a large number of hosts. A key method to provide this service is through the orchestration of silent delivery campaigns. These campaigns involve a group of downloaders that receive remote commands and then deliver their payloads without any user interaction. These campaigns can evade detection by relying on inconspicuous downloaders on the client side and on disposable domain names on the server side. We describe Beewolf, a system for detecting silent delivery campaigns from Internet-wide records of download events. The key observation behind our system is that the downloaders involved in these campaigns frequently retrieve payloads in lockstep. Beewolf identifies such locksteps in an unsupervised and deterministic manner, and can operate on streaming data. We utilize Beewolf to study silent delivery campaigns at scale, on a data set of 33.3 million download events. This investigation yields novel findings, e.g. malware distributed through compromised software update channels, a substantial overlap between the delivery ecosystems for malware and unwanted software, and several types of business relationships within these ecosystems. Beewolf achieves over 92% true positives and fewer than 5% false positives. Moreover, Beewolf can detect suspicious downloaders a median of 165 days ahead of existing anti-virus products and payload-hosting domains a median of 196 days ahead of existing blacklists.",
"title": ""
},
{
"docid": "93ec0a392a7a29312778c6834ffada73",
"text": "BACKGROUND\nThe new world of safe aesthetic injectables has become increasingly popular with patients. Not only is there less risk than with surgery, but there is also significantly less downtime to interfere with patients' normal work and social schedules. Botulinum toxin (BoNT) type A (BoNTA) is an indispensable tool used in aesthetic medicine, and its broad appeal has made it a hallmark of modern culture. The key to using BoNTA to its best effect is to understand patient-specific factors that will determine the treatment plan and the physician's ability to personalize injection strategies.\n\n\nOBJECTIVES\nTo present international expert viewpoints and consensus on some of the contemporary best practices in aesthetic BoNTA, so that beginner and advanced injectors may find pearls that provide practical benefits.\n\n\nMETHODS AND MATERIALS\nExpert aesthetic physicians convened to discuss their approaches to treatment with BoNT. The discussions and consensus from this meeting were used to provide an up-to-date review of treatment strategies to improve patient results. Information is presented on patient management and assessment, documentation and consent, aesthetic scales, injection strategies, dilution, dosing, and adverse events.\n\n\nCONCLUSION\nA range of product- and patient-specific factors influence the treatment plan. Truly optimized outcomes are possible only when the treating physician has the requisite knowledge, experience, and vision to use BoNTA as part of a unique solution for each patient's specific needs.",
"title": ""
},
{
"docid": "6d23403a93c0da233b1dbae581e55abb",
"text": "This paper proposes an intelligent trading system using support vector regression optimized by genetic algorithms (SVR-GA) and multilayer perceptron optimized with GA (MLP-GA). Experimental results show that both approaches outperform conventional trading systems without prediction and a recent fuzzy trading system in terms of final equity and maximum drawdown for Hong Kong Hang Seng stock index.",
"title": ""
},
{
"docid": "b6e1ab2729f1a9d195f85a5b0cfad41c",
"text": "Purpose – The paper aims to present a conceptual model that better defines critical success factors to ERP implementation organized with the technology, organization and environment (TOE) framework. The paper also adds to current literature the critical success factor of trust with the vendor, system and consultant which has largely been ignored in the past. Design/methodology/approach – The paper uses past literature and theoretical and conceptual framework development to illustrate a new conceptual model that incorporates critical success factors that have both been empirically tied to ERP implementation success in the past and new insights into how trust impacts ERP implementation success. Findings – The paper finds a lack of research depicted in how trust impacts ERP implementation success and likewise a lack of a greater conceptual model organized to provide insight into ERP implementation success. Originality/value – The paper proposes a holistic conceptual framework for ERP implementation success and discusses the impact that trust with the vendor, system and consultant has on ERP implementation success.",
"title": ""
},
{
"docid": "3dd8c177ae928f7ccad2aa980bd8c747",
"text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.",
"title": ""
},
{
"docid": "03b86b3a70391a84307854a666a5eb62",
"text": "Many well-established recommender systems are based on representation learning in Euclidean space. In these models, matching functions such as the Euclidean distance or inner product are typically used for computing similarity scores between user and item embeddings. This paper investigates the notion of learning user and item representations in Hyperbolic space. In this paper, we argue that Hyperbolic space is more suitable for learning user-item embeddings in the recommendation domain. Unlike Euclidean spaces, Hyperbolic spaces are intrinsically equipped to handle hierarchical structure, encouraged by its property of exponentially increasing distances away from origin. We propose HyperBPR (Hyperbolic Bayesian Personalized Ranking), a conceptually simple but highly effective model for the task at hand. Our proposed HyperBPR not only outperforms their Euclidean counterparts, but also achieves state-of-the-art performance on multiple benchmark datasets, demonstrating the effectiveness of personalized recommendation in Hyperbolic space.",
"title": ""
},
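The core ingredient of ranking in hyperbolic space is a hyperbolic distance such as the Poincare-ball distance sketched below; the embedding values are illustrative assumptions, and the actual HyperBPR model and training objective are not reproduced here:

```python
import numpy as np

# Sketch of the Poincare-ball distance often used for hyperbolic embeddings;
# a HyperBPR-style model would rank items for a user by such a distance
# instead of an inner product (embedding values are illustrative assumptions).
def poincare_distance(u, v, eps=1e-9):
    uu = np.clip(np.dot(u, u), 0.0, 1.0 - eps)
    vv = np.clip(np.dot(v, v), 0.0, 1.0 - eps)
    duv = np.dot(u - v, u - v)
    x = 1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv))
    return np.arccosh(x)

user = np.array([0.10, 0.20])
item_a = np.array([0.12, 0.22])    # closer in the ball -> ranked higher
item_b = np.array([-0.60, 0.55])
print(poincare_distance(user, item_a) < poincare_distance(user, item_b))   # True
```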
{
"docid": "fe70c7614c0414347ff3c8bce7da47e7",
"text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.",
"title": ""
},
{
"docid": "663925d096212c6ea6685db879581551",
"text": "Deep neural networks have shown promise in collaborative filtering (CF). However, existing neural approaches are either user-based or item-based, which cannot leverage all the underlying information explicitly. We propose CF-UIcA, a neural co-autoregressive model for CF tasks, which exploits the structural correlation in the domains of both users and items. The co-autoregression allows extra desired properties to be incorporated for different tasks. Furthermore, we develop an efficient stochastic learning algorithm to handle large scale datasets. We evaluate CF-UIcA on two popular benchmarks: MovieLens 1M and Netflix, and achieve state-of-the-art performance in both rating prediction and top-N recommendation tasks, which demonstrates the effectiveness of CF-UIcA.",
"title": ""
},
{
"docid": "2798217f6e2d9194a9a30834ed9af47a",
"text": "The main obstacle to transmit images in wireless sensor networks is the lack of an appropriate strategy for processing the large volume of data such as images. The high rate packets errors because of what numbers very high packets carrying the data of the captured images and the need for retransmission in case of errors, and more, the energy reserve and band bandwidth is insufficient to accomplish these tasks. This paper presents new effective technique called “Background subtraction” to compress, process and transmit the images in a wireless sensor network. The practical results show the effectiveness of this approach to make the image compression in the networks of wireless sensors achievable, reliable and efficient in terms of energy and the minimization of amount of image data.",
"title": ""
},
{
"docid": "9e8e57ef22d3dfe139f4b9c9992b0884",
"text": "It has been suggested that when the variance assumptions of a repeated measures ANOVA are not met, the df of the mean square ratio should be adjusted by the sample estimate of the Box correction factor, e. This procedure works well when e is low, but the estimate is seriously biased when this is not the case. An alternate estimate is proposed which is shown by Monte Carlo methods to be less biased for moderately large e.",
"title": ""
},
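For context, a sketch of the widely used sample estimate of the correction factor (the Greenhouse-Geisser form) and of the Huynh-Feldt-style adjustment commonly cited as the less-biased alternative is given below; whether this matches the exact estimator proposed in the abstract above is an assumption, and the data are synthetic:

```python
import numpy as np

# Sketch of the sample epsilon estimate for repeated-measures ANOVA
# (Greenhouse-Geisser form) plus the Huynh-Feldt-style adjustment usually
# cited as the less-biased alternative; the data below are illustrative.
def gg_epsilon(data):
    n, k = data.shape                     # subjects x repeated measures
    S = np.cov(data, rowvar=False)
    # Double-center the covariance matrix.
    S_dc = S - S.mean(axis=0, keepdims=True) - S.mean(axis=1, keepdims=True) + S.mean()
    return (np.trace(S_dc) ** 2) / ((k - 1) * np.sum(S_dc ** 2))

def hf_epsilon(data):
    n, k = data.shape
    eps = gg_epsilon(data)
    return min(1.0, (n * (k - 1) * eps - 2) / ((k - 1) * (n - 1 - (k - 1) * eps)))

rng = np.random.default_rng(0)
scores = rng.normal(size=(12, 4))         # 12 subjects, 4 conditions (assumed)
print(gg_epsilon(scores), hf_epsilon(scores))
```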
{
"docid": "1eaaca1a273dccf54f31880c8aa97f5c",
"text": "It has been proposed that allocating procedure activation records on a garbage collected heap is more e cient than stack allocation. However, previous comparisons of heap vs. stack allocation have been over-simplistic, neglecting (for example) frame pointers, or the better locality of reference of stacks. We present a comprehensive analysis of all the components of creation, access, and disposal of heap-allocated and stack-allocated activation records. Among our results are: Although stack frames are known to have a better cache read-miss rate than heap frames, our simple analytical model (backed up by simulation results) shows that the di erence is too trivial to matter. The cache write-miss rate of heap frames is very high; we show that a variety of miss-handling strategies (exempli ed by speci c modern machines) can give good performance, but not all can. The write-miss policy of the primary cache is much more important than the write-miss policy of the secondary cache. Stacks restrict the exibility of closure representations (for higher-order functions) in important (and costly) ways. The extra load placed on the garbage collector by heap-allocated frames is very small. The demands of modern programming languages make stacks quite complicated to implement e ciently and correctly. Overall, the execution cost of stack-allocated and heap-allocated frames is very similar; but heap frames are simpler to implement and allow very e cient rst-class continuations (call/cc). 1 Garbage-collected frames In a programming language implementation that uses garbage collection, all procedure activation records (frames) can be allocated on the heap. This is quite convenient for higher-order languages (Scheme, ML, etc.) whose \\closures\" can have inde nite extent, and it is even more convenient for languages with rst-class continuations. One might think that it would be expensive to allocate, at every procedure call, heap storage that becomes garbage on return. But not necessarily [2]: modern generational garbage-collection algorithms[31] can reclaim dead frames extremely e ciently, even cheaper than the one-instruction cost to pop the stack. But there are many other costs involved in creating, accessing, and destroying activation records| whether on a heap or a stack. These costs are summarized in Figure 1, and explained and analyzed in the remainder of the paper. These numbers depend on many assumptions. The most critical assumptions are these: The runtime system in question has static scope, higher order functions, and garbage collection. The only question being investigated is whether there is an activation-record stack in addition to the garbage collection of other objects. The compiler and garbage collector are required to be \\safe for space complexity;\" that is, statically dead pointers (in the data ow sense) do not keep objects live. There are few side e ects in compiled programs, so that generational garbage collection will be e cient. These assumptions, and many others, will be explained in the rest of the paper.",
"title": ""
},
{
"docid": "1538ff59f18c6e6bc98acedb08ab5f78",
"text": "Radar theory and radar system have developed a lot for the last 50 years or so. Recently, a new concept in array radar has been introduced by the multiple-input multiple-output (MIMO) radar, which has the potential to dramatically improve the performance of radars in parameters estimation. While an earlier appeared concept, synthetic impulse and aperture radar (SIAR) is a typical kind of MIMO radar and probes a channel by transmitting multiple signals separated both spectrally and spatially. To the best knowledge of the authors, almost all the analyses available are based on the simple linear array while our SIAR system is based on a circular array. This paper first introduces the recent research and development in and the features of MIMO radars, then discusses our SIAR system as a specific example of MIMO system and finally the unique advantages of SIAR are listed",
"title": ""
},
{
"docid": "edf03a85dea4d2ef977be2551b5fea24",
"text": "A fungus causing conspicuous leaf spots of Prunus serotina was recently collected in Surrey, UK. It proved to represent an undescribed species which also cannot be referred to any known genus. The species is described as Miricatena prunicola gen. et sp. nov.",
"title": ""
},
{
"docid": "a8920f6ba4500587cf2a160b8d91331a",
"text": "In this paper, we present an approach that can handle Z-numbers in the context of multi-criteria decision-making problems. The concept of Z-number as an ordered pair Z=(A, B) of fuzzy numbers A and B is used, where A is a linguistic value of a variable of interest and B is a linguistic value of the probability measure of A. As human beings, we communicate with each other by means of natural language using sentences like “the journey from home to university most likely takes about half an hour.” The Z-numbers are converted to fuzzy numbers. Then the Z-TODIM and Z-TOPSIS are presented as a direct extension of the fuzzy TODIM and fuzzy TOPSIS, respectively. The proposed methods are applied to two case studies and compared with the standard approach using crisp values. The results obtained show the feasibility of the approach.",
"title": ""
},
{
"docid": "5515e892363c3683e39c6d5ec4abe22d",
"text": "Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today’s safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what’s socially acceptable and what’s reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ‘‘ex cursus’’ of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9a397ca2a072d9b1f861f8a6770aa792",
"text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.",
"title": ""
},
{
"docid": "c704bef058d6e6728612f48874867ec6",
"text": "The machine-to-machine (M2M) communication, which plays a vital role in the Internet of Things (IoT), allows wireless and wired systems to monitor environments and exchange the information among various machines automatically without human interventions. In order to promote the development of the IoT and exploit the M2M applications, the Internet Engineering Task Force (IETF) has been developing a standard named Internet Protocol version 6 (IPv6) over low-power wireless personal area networks (6LoWPAN) to enable IP-based M2M devices to connect to the open Internet. Although the 6LoWPAN standard has specified the important issues in the M2M communications, various security issues have not been addressed. In this paper, an enhanced mutual authentication and key establishment scheme is designed for the M2M communications in 6LoWPAN networks. The proposed scheme enables a 6LoWPAN device to securely authenticate with the remote server with a session key established between them. The security proof by the protocol composition logic can prove the logic correctness of the proposed scheme. The formal verification and the simulation show that the proposed scheme in 6LoWPAN could not only enhance the security functionality with the ability to prevent various malicious attacks, but also incur less computational and transmission overhead.",
"title": ""
},
{
"docid": "c3f6e26eb8cccde1b462e2ab6bb199c3",
"text": "Scale-out distributed storage systems have recently gained high attentions with the emergence of big data and cloud computing technologies. However, these storage systems sometimes suffer from performance degradation, especially when the communication subsystem is not fully optimized. The problem becomes worse as the network bandwidth and its corresponding traffic increase. In this paper, we first conduct an extensive analysis of communication subsystem in Ceph, an object-based scale-out distributed storage system. Ceph uses asynchronous messenger framework for inter-component communication in the storage cluster. Then, we propose three major optimizations to improve the performance of Ceph messenger. These include i) deploying load balancing algorithm among worker threads based on the amount of workloads, ii) assigning multiple worker threads (we call dual worker) per single connection to maximize the overlapping activity among threads, and iii) using multiple connections between storage servers to maximize bandwidth usage, and thus reduce replication overhead. The experimental results show that the optimized Ceph messenger outperforms the original messenger implementation up to 40% in random writes with 4K messages. Moreover, Ceph with optimized communication subsystem shows up to 13% performance improvement as compared to original Ceph.",
"title": ""
},
{
"docid": "9fa53682b83e925409ea115569494f70",
"text": "Circuit techniques for enabling a sub-0.9 V logic-compatible embedded DRAM (eDRAM) are presented. A boosted 3T gain cell utilizes Read Word-line (RWL) preferential boosting to increase read margin and improve data retention time. Read speed is enhanced with a hybrid current/voltage sense amplifier that allows the Read Bit-line (RBL) to remain close to VDD. A regulated bit-line write scheme for driving the Write Bit-line (WBL) is equipped with a steady-state storage node voltage monitor to overcome the data `1' write disturbance problem of the PMOS gain cell without introducing another boosted supply for the Write Word-line (WWL) over-drive. An adaptive and die-to-die adjustable read reference bias generator is proposed to cope with PVT variations. Monte Carlo simulations compare the 6-sigma read and write performance of proposed eDRAM against conventional designs. Measurement results from a 64 kb eDRAM test chip implemented in a 65 nm low-leakage CMOS process show a 1.25 ms data retention time with a 2 ns random cycle time at 0.9 V, 85°C, and a 91.3 μW per Mb static power dissipation at 1.0 V, 85°C.",
"title": ""
}
] |
scidocsrr
|
20918f720c631c016d2557355be6cec3
|
Predicting the Next Location: A Recurrent Model with Spatial and Temporal Contexts
|
[
{
"docid": "cd3f003e11fa13b794a28cf60772f248",
"text": "Collaborative Filtering aims to predict user tastes, by minimising the mean error produced when predicting hidden user ratings. The aim of a deployed recommender system is to iteratively predict users' preferences over a dynamic, growing dataset, and system administrators are confronted with the problem of having to continuously tune the parameters calibrating their CF algorithm. In this work, we formalise CF as a time-dependent, iterative prediction problem. We then perform a temporal analysis of the Netflix dataset, and evaluate the temporal performance of two CF algorithms. We show that, due to the dynamic nature of the data, certain prediction methods that improve prediction accuracy on the Netflix probe set do not show similar improvements over a set of iterative train-test experiments with growing data. We then address the problem of parameter selection and update, and propose a method to automatically assign and update per-user neighbourhood sizes that (on the temporal scale) outperforms setting global parameters.",
"title": ""
},
{
"docid": "9df0cdd0273b19737de0591310131bff",
"text": "We present freely available open-source toolkit for training recurrent neural network based language models. I t can be easily used to improve existing speech recognition and ma chine translation systems. Also, it can be used as a baseline for fu ture research of advanced language modeling techniques. In the p a er, we discuss optimal parameter selection and different modes of functionality. The toolkit, example scripts and basic setups are freely available at http://rnnlm.sourceforge.net/. I. I NTRODUCTION, MOTIVATION AND GOALS Statistical language modeling attracts a lot of attention, as models of natural languages are important part of many practical systems today. Moreover, it can be estimated that with further research progress, language models will becom e closer to human understanding [1] [2], and completely new applications will become practically realizable. Immedia tely, any significant progress in language modeling can be utilize d in the esisting speech recognition and statistical machine translation systems. However, the whole research field struggled for decades to overcome very simple, but also effective models based on ngram frequencies [3] [4]. Many techniques were developed to beat n-grams, but the improvements came at the cost of computational complexity. Moreover, the improvements wer e often reported on very basic systems, and after application to state-of-the-art setups and comparison to n-gram models trained on large amounts of data, improvements provided by many techniques vanished. This has lead to scepticism among speech recognition researchers. In our previous work, we have compared many major advanced language modeling techniques, and found that neur al network based language models (NNLM) perform the best on several standard setups [5]. Models of this type were introduced by Bengio in [6], about ten years ago. Their main weaknesses were huge computational complexity, and nontrivial implementation. Successful training of neural net works require well chosen hyper-parameters, such as learning rat e and size of hidden layer. To help overcome these basic obstacles, we have decided to release our toolkit for training recurrent neural network b ased language models (RNNLM). We have shown that the recurrent architecture outperforms the feedforward one on several se tup in [7]. Moreover, the implemenation is simple and easy to understand. The most importantly, recurrent neural networ ks are very interesting from the research point of view, as they allow effective processing of sequences and patterns with arbitraty length these models can learn to store informati on in the hidden layer. Recurrent neural networks can have memory , and are thus important step forward to overcome the most painful and often criticized drawback of n-gram models dependence on previous two or three words only. In this paper we present an open source and freely available toolkit for training statistical language models base d or recurrent neural networks. It includes techniques for redu cing computational complexity (classes in the output layer and direct connections between input and output layer). Our too lkit has been designed to provide comparable results to the popul ar toolkit for training n-gram models, SRILM [8]. 
The main goals for the RNNLM toolkit are these: • promotion of research of advanced language modeling techniques • easy usage • simple portable code without any dependencies • computational efficiency In the paper, we describe how to easily make RNNLM part of almost any speech recognition or machine translation syste m that produces lattices. II. RECURRENTNEURAL NETWORK The recurrent neural network architecture used in the toolk it is shown at Figure 1 (usually called Elman network, or simple RNN). The input layer uses the 1-of-N representation of the previous wordw(t) concatenated with previous state of the hidden layers(t − 1). The neurons in the hidden layer s(t) use sigmoid activation function. The output layer (t) has the same dimensionality as w(t), and after the network is trained, it represents probability distribution of the next word giv en the previous word and state of the hidden layer in the previous time step [9]. The class layer c(t) can be optionally used to reduce computational complexity of the model, at a small cost of accuracy [7]. Training is performed by the standard stochastic gradient descent algorithm, and the matrix W that",
"title": ""
},
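One step of the Elman-style architecture described above (1-of-N input for the previous word, sigmoid hidden layer fed by the previous hidden state, softmax output) can be sketched as follows; the layer sizes and random weights are assumptions and no training is shown:

```python
import numpy as np

# Minimal sketch of one step of an Elman-style recurrent language model:
# 1-of-N encoding of the previous word plus the previous hidden state,
# sigmoid hidden layer, softmax output. Sizes are arbitrary assumptions.
rng = np.random.default_rng(0)
V, H = 10, 8                             # vocabulary size, hidden size
U = rng.normal(scale=0.1, size=(H, V))   # input-to-hidden weights
W = rng.normal(scale=0.1, size=(H, H))   # recurrent weights
O = rng.normal(scale=0.1, size=(V, H))   # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(prev_word, s_prev):
    x = np.zeros(V); x[prev_word] = 1.0  # 1-of-N encoding of w(t)
    s = sigmoid(U @ x + W @ s_prev)      # hidden state s(t)
    y = softmax(O @ s)                   # P(next word | history)
    return s, y

s = np.zeros(H)
for w in [3, 7, 1]:                      # a toy word-index sequence
    s, y = step(w, s)
print(y.sum())                           # ~1.0, a valid distribution
```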
{
"docid": "841a5ecba126006e1deb962473662788",
"text": "In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. To gain additional insights of its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.",
"title": ""
},
{
"docid": "7e6182248b3c3d7dedce16f8bfa58b28",
"text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.",
"title": ""
}
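The geographical-influence component described above, a power law over pairwise check-in distances combined in naive-Bayes fashion, can be sketched as below; the power-law parameters and POI coordinates are illustrative assumptions rather than values fitted to real check-in data:

```python
import numpy as np

# Sketch of the power-law geographical influence idea: a candidate POI is
# scored against already-visited POIs as a naive-Bayes style product of
# pairwise distance terms P(d) = a * d^b (a, b are illustrative assumptions,
# not parameters fitted to real check-in data).
a, b = 0.1, -1.5

def pairwise_term(dist):
    return a * max(dist, 0.01) ** b       # clamp tiny distances

def geo_score(candidate, visited):
    dists = [np.linalg.norm(np.array(candidate) - np.array(p)) for p in visited]
    # Product over pairwise terms, accumulated in log space for stability.
    return sum(np.log(pairwise_term(d)) for d in dists)

visited = [(0.0, 0.0), (0.5, 0.2)]        # coordinates of checked-in POIs
print(geo_score((0.4, 0.1), visited) > geo_score((5.0, 5.0), visited))   # True
```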
] |
[
{
"docid": "bba81ac392b87a123a1e2f025bffd30c",
"text": "This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework that includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems including the two-objective deep sea treasure environment and the three-objective mountain car problem indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, which allows implementation of different deep reinforcement learning algorithms in different complex environments. This therefore overcomes many difficulties involved with standard multi-objective reinforcement learning (MORL) methods existing in the current literature. The framework creates a platform as a testbed environment to develop methods for solving various problems associated with the current MORL. Details of the framework implementation can be referred to http://www.deakin.edu.au/~thanhthi/drl.htm.",
"title": ""
},
{
"docid": "9b72d423e13bdd125b3a8c30b40e6b49",
"text": "With the increasing popularity of the web, some new web technologies emerged and introduced dynamics to web applications, in comparison to HTML, as a static programming language. JavaScript is the language that provided a dynamic web site which actively communicates with users. JavaScript is used in today's web applications as a client script language and on the server side. The JavaScript language supports the Model View Controller (MVC) architecture that maintains a readable code and clearly separates parts of the program code. The topic of this research is to compare the popular JavaScript frameworks: AngularJS, Ember, Knockout, Backbone. All four frameworks are based on MVC or similar architecture. In this paper, the advantages and disadvantages of each framework, the impact on application speed, the ways of testing such JS applications and ways to improve code security are presented.",
"title": ""
},
{
"docid": "7548b99b332677e01ca6d74592f62ab1",
"text": "This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the \"RobotCub\" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project \"ITALK\" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.",
"title": ""
},
{
"docid": "ae38bb46fd3ceed3f4800b6421b45d74",
"text": "Medicinal data mining methods are used to analyze the medical data information resources. Medical data mining content mining and structure methods are used to analyze the medical data contents. The effort to develop knowledge and experience of frequent specialists and clinical selection data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. Diagnosis of heart disease is a significant and tedious task in medicine. The term Heart disease encompasses the various diseases that affect the heart. The exposure of heart disease from various factors or symptom is an issue which is not complimentary from false presumptions often accompanied by unpredictable effects. Association rule mining procedures are used to extract item set relations. Item set regularities are used in the rule mining process. The data classification is based on MAFIA algorithms which result in accuracy, the data is evaluated using entropy based cross validations and partition techniques and the results are compared. Here using the C4.5 algorithm as the training algorithm to show rank of heart attack with the decision tree. Finally, the heart disease database is clustered using the K-means clustering algorithm, which will remove the data applicable to heart attack from the database. The results showed that the medicinal prescription and designed prediction system is capable of prophesying the heart attack successfully.",
"title": ""
},
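The pipeline shape described above (a decision-tree classifier evaluated with entropy-based cross-validation plus K-means clustering of the records) can be sketched with scikit-learn; note that sklearn's CART tree stands in for C4.5 here, and the data are synthetic stand-ins rather than a real heart-disease database:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a heart-disease table: 6 assumed clinical attributes
# and a synthetic risk label (not real patient data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# Entropy-criterion decision tree (CART standing in for C4.5), scored by CV.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
print(cross_val_score(tree, X, y, cv=5).mean())

# K-means clustering of the records, as in the final step described above.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(clusters))
```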
{
"docid": "582738ff2d1369a7faf9480e5af9a717",
"text": "Deep learning has led to significant advances in artificial intelligence in recent years, in part by adopting architectures and functions motivated by neurophysiology. However, current deep learning algorithms are biologically infeasible, because they assume non-spiking units, discontinuous-time, and non-local synaptic weight updates. Here, we build on recent discoveries in artificial neural networks to develop a spiking, continuous-time neural network model that learns to categorize images from the MNIST data-set with local synaptic weight updates. The model achieves this via a three-compartment cellular architecture, motivated by neocortical pyramidal cell neurophysiology, wherein feedforward sensory information and feedback from higher layers are received at separate compartments in the neurons. We show that, thanks to the separation of feedforward and feedback information in different dendrites, our learning algorithm can coordinate learning across layers, taking advantage of multilayer architectures to identify abstract categories—the hallmark of deep learning. Our model demonstrates that deep learning can be achieved within a biologically feasible framework using segregated dendritic compartments, which may help to explain the anatomy of neocortical pyramidal neurons.",
"title": ""
},
{
"docid": "97de6efcdba528f801cbfa087498ab3f",
"text": "Abstract: Educational Data Mining refers to techniques, tools, and research designed for automatically extracting meaning from large repositories of data generated by or related to people' learning activities in educational settings.[1] It is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.[2]",
"title": ""
},
{
"docid": "824b0e8a66699965899169738df7caa9",
"text": "Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.",
"title": ""
},
{
"docid": "ec6b6463fdbabbaade4c9186b14e7acf",
"text": "In order for robots to learn from people with no machine learning expertise, robots should learn from natural human instruction. Most machine learning techniques that incorporate explanations require people to use a limited vocabulary and provide state information, even if it is not intuitive. This paper discusses a software agent that learned to play the Mario Bros. game using explanations. Our goals to improve learning from explanations were twofold: 1) to filter explanations into advice and warnings and 2) to learn policies from sentences without state information. We used sentiment analysis to filter explanations into advice of what to do and warnings of what to avoid. We developed object-focused advice to represent what actions the agent should take when dealing with objects. A reinforcement learning agent used object-focused advice to learn policies that maximized its reward. After mitigating false negatives, using sentiment as a filter was approximately 85% accurate. object-focused advice performed better than when no advice was given, the agent learned where to apply the advice, and the agent could recover from adversarial advice. We also found the method of interaction should be designed to ease the cognitive load of the human teacher or the advice may be of poor quality.",
"title": ""
},
{
"docid": "b71ffe031d4767aa08e2fdb317563bc7",
"text": "Fat-tailed sheep come in various colours—most are either brown (tan) or black. In some, most of the body is white with the tan or black colour restricted to the front portion of the body or to just around the eyes, muzzle and parts of the legs. The Karakul breed is important for the production of lamb skins of various colours for the fashion industry. As well as the black and tan colours there are Karakuls bred for grey or roan shades, a white colour or one of the numerous Sur shades. In the Sur shades, the base of the birthcoat fibre is one of a number of dark shades and the tip a lighter or white shade. All these colours and many others are the result of the interaction of various genes that determine the specifics of the coat colour of the sheep. A number of sets of nomenclature and symbols have been used to represent the various loci and their alleles that are involved. In the 1980s and 1990s, a standardised set, based closely on those of the mouse and other species was developed. Using this as the framework, the alleles of the Extension, Agouti, Brown, Spotting, Pigmented Head and Roan loci are described using fat-tailed sheep (mainly Damara, Karakul and Persian) as examples. Further discussion includes other types of “white markings,” the Ticking locus and the Sur loci.",
"title": ""
},
{
"docid": "33f0b65a820446cf957d44ba73cb1e88",
"text": "It is well known that in order to calibrate a single camera with a one-dimensional (1D) calibration object, the object must undertake some constrained motions, in other words, it is impossible to calibrate a single camera if the object motion is of general one. For a multi-camera setup, i.e., when the number of camera is more than one, can the cameras be calibrated by a 1D object under general motions? In this work, we prove that all cameras can indeed be calibrated and a calibration algorithm is also proposed and experimentally tested. In contrast to other multi-camera calibration method, no one calibrated \"base\" camera is needed. In addition, we show that for such multi-camera cases, the minimum condition of calibration and critical motions are similar to those of calibrating a single camera with 1D calibration object.",
"title": ""
},
{
"docid": "2bc30693be1c5855a9410fb453128054",
"text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"title": ""
},
{
"docid": "65b986cbfe1c3668b0cdea4321e4921e",
"text": "Once a bug in software is reported, developers have to determine which source files are related to the bug. This process is referred to as bug localization, and an automatic way of bug localization is important to improve developers' productivity. This paper proposes an approach called DrewBL to efficiently localize faulty files for a given bug report using a natural language processing tool, word2vec. In DrewBL, we first build a vector space model named semantic-VSM which represents a distributed representation of words in the bug report and source code files and next compute the relevance between them by feeding the constructed model to word2vec. We also present an approach called CombBL to further improve the accuracy of bug localization which employs not only the proposed DrewBL but also existing bug localization techniques, such as BugLocator based on textual similarity and Bugspots based on bug-fixing history, in a combinational manner. This paper gives our early experimental results to show the effectiveness and efficiency of the proposed approaches using two open source projects.",
"title": ""
},
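The record above describes ranking source files against a bug report via a word2vec-based vector space model. As a rough, hypothetical illustration of that general idea only (not DrewBL's actual semantic-VSM, tokenisation, or training setup), the sketch below assumes a pre-trained embedding table and ranks files by cosine similarity of averaged token vectors; all names and the toy data are invented.

```python
# Minimal sketch: rank source files by embedding similarity to a bug report.
# Assumes `emb` maps token -> vector; this is illustrative, not DrewBL itself.
import numpy as np

def avg_vector(tokens, emb, dim=100):
    """Average the embeddings of the tokens that are in the vocabulary."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_files(bug_report_tokens, file_tokens_by_path, emb, dim=100):
    """Return source files sorted by similarity to the bug report."""
    q = avg_vector(bug_report_tokens, emb, dim)
    scored = [(path, cosine(q, avg_vector(toks, emb, dim)))
              for path, toks in file_tokens_by_path.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy usage with a hypothetical 3-dimensional embedding table.
emb = {"crash": np.array([1.0, 0.0, 0.0]),
       "parser": np.array([0.0, 1.0, 0.0]),
       "render": np.array([0.0, 0.0, 1.0])}
files = {"src/parser.py": ["parser", "crash"], "src/render.py": ["render"]}
print(rank_files(["crash", "parser"], files, emb, dim=3))
```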
{
"docid": "0249db106163559e34ff157ad6d45bf5",
"text": "We present an interpolation-based planning and replanning algorithm for generating low-cost paths through uniform and nonuniform resolution grids. Most grid-based path planners use discrete state transitions that artificially constrain an agent’s motion to a small set of possible headings e.g., 0, /4, /2, etc. . As a result, even “optimal” gridbased planners produce unnatural, suboptimal paths. Our approach uses linear interpolation during planning to calculate accurate path cost estimates for arbitrary positions within each grid cell and produce paths with a range of continuous headings. Consequently, it is particularly well suited to planning low-cost trajectories for mobile robots. In this paper, we introduce a version of the algorithm for uniform resolution grids and a version for nonuniform resolution grids. Together, these approaches address two of the most significant shortcomings of grid-based path planning: the quality of the paths produced and the memory and computational requirements of planning over grids. We demonstrate our approaches on a number of example planning problems, compare them to related algorithms, and present several implementations on real robotic systems. © 2006 Wiley Periodicals, Inc.",
"title": ""
},
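The record above hinges on interpolating path costs along cell edges instead of snapping to discrete headings. The brute-force sketch below illustrates that idea under simplifying assumptions (unit cells, numeric minimisation); it is not the closed-form solution or the full planner described in the abstract.

```python
# A minimal numeric sketch of the interpolation idea: the cost of reaching a
# grid node through a cell is minimised over a continuous entry point y on the
# edge between two corner nodes s1 and s2, so paths are not forced onto the
# 8 discrete headings. A brute-force search stands in for the closed-form
# minimisation used by the actual algorithm.
import numpy as np

def interpolated_cost(g1, g2, c, samples=1001):
    """g1, g2: path costs of the two corner nodes; c: cell traversal cost."""
    y = np.linspace(0.0, 1.0, samples)
    # Travel distance sqrt(1 + y^2) through the unit cell, plus the linearly
    # interpolated cost of the entry point on the edge between s1 and s2.
    total = c * np.sqrt(1.0 + y**2) + (1.0 - y) * g1 + y * g2
    i = int(np.argmin(total))
    return float(total[i]), float(y[i])

# If the far corner s2 is much cheaper to reach, the best entry point slides
# towards it instead of snapping to one of the two corners.
print(interpolated_cost(g1=10.0, g2=6.0, c=1.0))
```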
{
"docid": "528a22ba860fd4ad4da3773ff2b01dcd",
"text": "During the last decade it has become more widely accepted that pet ownership and animal assistance in therapy and education may have a multitude of positive effects on humans. Here, we review the evidence from 69 original studies on human-animal interactions (HAI) which met our inclusion criteria with regard to sample size, peer-review, and standard scientific research design. Among the well-documented effects of HAI in humans of different ages, with and without special medical, or mental health conditions are benefits for: social attention, social behavior, interpersonal interactions, and mood; stress-related parameters such as cortisol, heart rate, and blood pressure; self-reported fear and anxiety; and mental and physical health, especially cardiovascular diseases. Limited evidence exists for positive effects of HAI on: reduction of stress-related parameters such as epinephrine and norepinephrine; improvement of immune system functioning and pain management; increased trustworthiness of and trust toward other persons; reduced aggression; enhanced empathy and improved learning. We propose that the activation of the oxytocin system plays a key role in the majority of these reported psychological and psychophysiological effects of HAI. Oxytocin and HAI effects largely overlap, as documented by research in both, humans and animals, and first studies found that HAI affects the oxytocin system. As a common underlying mechanism, the activation of the oxytocin system does not only provide an explanation, but also allows an integrative view of the different effects of HAI.",
"title": ""
},
{
"docid": "89322e0d2b3566aeb85eeee9f505d5b2",
"text": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. In this Seminar, we review these complexities and challenges of Parkinson's disease.",
"title": ""
},
{
"docid": "1a620e17048fa25cfc54f5c9fb821f39",
"text": "The performance of a detector depends much on its training dataset and drops significantly when the detector is applied to a new scene due to the large variations between the source training dataset and the target scene. In order to bridge this appearance gap, we propose a deep model to automatically learn scene-specific features and visual patterns in static video surveillance without any manual labels from the target scene. It jointly learns a scene-specific classifier and the distribution of the target samples. Both tasks share multi-scale feature representations with both discriminative and representative power. We also propose a cluster layer in the deep model that utilizes the scenespecific visual patterns for pedestrian detection. Our specifically designed objective function not only incorporates the confidence scores of target training samples but also automatically weights the importance of source training samples by fitting the marginal distributions of target samples. It significantly improves the detection rates at 1 FPPI by 10% compared with the state-of-the-art domain adaptation methods on MIT Traffic Dataset and CUHK Square Dataset.",
"title": ""
},
{
"docid": "fe0fa94ce6f02626fca12f21b60bec46",
"text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management were searched for through online journal databases, particularly Web of Science, and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript summarizes with future recommendationsin terms collaborations of public/ private patternships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centersand recycling and compositing activities should be",
"title": ""
},
{
"docid": "9b4c240bd55523360e92dbed26cb5dc2",
"text": "CBT has been seen as an alternative to the unmanageable population of undergraduate students in Nigerian universities. This notwithstanding, the peculiar nature of some courses hinders its total implementation. This study was conducted to investigate the students’ perception of CBT for undergraduate chemistry courses in University of Ilorin. To this end, it examined the potential for using student feedback in the validation of assessment. A convenience sample of 48 students who had taken test on CBT in chemistry was surveyed and questionnaire was used for data collection. Data analysis demonstrated an auspicious characteristics of the target context for the CBT implementation as majority (95.8%) of students said they were competent with the use of computers and 75% saying their computer anxiety was only mild or low but notwithstanding they have not fully accepted the testing mode with only 29.2% in favour of it, due to the impaired validity of the test administration which they reported as being many erroneous chemical formulas, equations and structures in the test items even though they have nonetheless identified the achieved success the testing has made such as immediate scoring, fastness and transparency in marking. As quality of designed items improves and sufficient time is allotted according to the test difficulty, the test experience will become favourable for students and subsequently CBT will gain its validation in this particular context.",
"title": ""
},
{
"docid": "37dc459d820ebd8234d1dafd0924b894",
"text": "We present SBFT: a scalable decentralized trust infrastructure for Blockchains. SBFT implements a new Byzantine fault tolerant algorithm that addresses the challenges of scalability and decentralization. Unlike many previous BFT systems that performed well only when centralized around less than 20 replicas, SBFT is optimized for decentralization and can easily handle more than 100 active replicas. SBFT provides a smart contract execution environment based on Ethereum’s EVM byte-code. We tested SBFT by running 1 million EVM smart contract transactions taken from a 4-month real-world Ethereum workload. In a geo-replicated deployment that has about 100 replicas and can withstand f = 32 Byzantine faults our system shows speedups both in throughput and in latency. SBFT completed this execution at a rate of 50 transactions per second. This is a 10× speedup compared to Ethereum current limit of 5 transactions per second. SBFT latency to commit a smart contract execution and make it final is sub-second, this is more than 10× speedup compared to Ethereum current > 15 second block generation for registering a smart contract execution and several orders of magnitude speedup relative to Proof-of-Work best-practice finality latency of one-hour.",
"title": ""
},
{
"docid": "6476066913e37c88e94cc83c15b05f43",
"text": "The Aduio-visual Speech Recognition (AVSR) which employs both the video and audio information to do Automatic Speech Recognition (ASR) is one of the application of multimodal leaning making ASR system more robust and accuracy. The traditional models usually treated AVSR as inference or projection but strict prior limits its ability. As the revival of deep learning, Deep Neural Networks (DNN) becomes an important toolkit in many traditional classification tasks including ASR, image classification, natural language processing. Some DNN models were used in AVSR like Multimodal Deep Autoencoders (MDAEs), Multimodal Deep Belief Network (MDBN) and Multimodal Deep Boltzmann Machine (MDBM) that actually work better than traditional methods. However, such DNN models have several shortcomings: (1) They don’t balance the modal fusion and temporal fusion, or even haven’t temporal fusion; (2)The architecture of these models isn’t end-to-end, the training and testing getting cumbersome. We propose a DNN model, Auxiliary Multimodal LSTM (am-LSTM), to overcome such weakness. The am-LSTM could be trained and tested in one time, alternatively easy to train and preventing overfitting automatically. The extensibility and flexibility are also take into consideration. The experiments shows that am-LSTM is much better than traditional methods and other DNN models in three datasets: AVLetters, AVLetters2, AVDigits.",
"title": ""
}
] |
scidocsrr
|
224cbc125cfbc291d19fe1f6da5c81df
|
Human Factors in Agile Software Development
|
[
{
"docid": "221cd488d735c194e07722b1d9b3ee2a",
"text": "HURTS HELPS HURTS HELPS Data Type [Target System] Implicit HELPS HURTS HURTS BREAKS ? Invocation [Target System] Pipe & HELPS BREAKS BREAKS HELPS Filter WHEN [Target condl System] condl: size of data in domain is huge Figure 13.4. A generic Correlation Catalogue, based on [Garlan93]. Figure 13.3 shows a method which decomposes the topic on process, including algorithms as used in [Garlan93]. Decomposition methods for processes are also described in [Nixon93, 94a, 97a], drawing on implementations of processes [Chung84, 88]. These two method definitions are unparameterized. A fuller catalogue would include parameterized definitions too. Operationalization methods, which organize knowledge about satisficing NFR softgoals, are embedded in architectural designs when selected. For example, an ImplicitFunctionlnvocationRegime (based on [Garlan93]' architecture 3) can be used to hide implementation details in order to make an architectural 358 NON-FUNCTIONAL REQUIREMENTS IN SOFTWARE ENGINEERING design more extensible, thus contributing to one of the softgoals in the above decomposition. Argumentation methods and templates are used to organize principles and guidelines for making design rationale for or against design decisions (Cf. [J. Lee91]).",
"title": ""
},
{
"docid": "2e00e8ee2e5661ca17c621adcea99cb7",
"text": "SCRUM poses key challenges for usability (Baxter et al., 2008). First, product goals are set without an adequate study of the userpsilas needs and context. The user stories selected may not be good enough from the usability perspective. Second, user stories of usability import may not be prioritized high enough. Third, given the fact that a product owner thinks in terms of the minimal marketable set of features in a just-in-time process, it is difficult for the development team to get a holistic view of the desired product or features. This experience report proposes U-SCRUM as a variant of the SCRUM methodology. Unlike typical SCRUM, where at best a team member is responsible for usability, U-SCRUM is based on our experience with having two product owners, one focused on usability and the other on the more conventional functions. Our preliminary result is that U-SCRUM yields improved usability than SCRUM.",
"title": ""
},
{
"docid": "ea87bfc0d6086e367e8950b445529409",
"text": " Queue stability (Chapter 2.1) Scheduling for stability, capacity regions (Chapter 2.3) Linear programs (Chapter 2.3, Chapter 3) Energy optimality (Chapter 3.2) Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) Inequality constraints and virtual queues (Chapter 4.4) Drift-plus-penalty algorithm (Chapter 4.5) Performance and delay tradeoffs (Chapter 3.2, 4.5) Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)",
"title": ""
}
] |
[
{
"docid": "f57c0a2965e13a951108310b57723e30",
"text": "Item category has proven to be useful additional information to address the data sparsity and cold start problems in recommender systems. Although categories have been well studied in which they are independent and structured in a flat form, in many real applications, item category is often organized in a richer knowledge structure - category hierarchy, to reflect the inherent correlations among different categories. In this paper, we propose a novel latent factor model by exploiting category hierarchy from the perspectives of both users and items for effective recommendation. Specifically, a user can be influenced by her preferred categories in the hierarchy. Similarly, an item can be characterized by the associated categories in the hierarchy. We incorporate the influence that different categories have towards a user and an item in the hierarchical structure. Experimental results on two real-world data sets demonstrate that our method consistently outperforms the state-of-the-art category-aware recommendation algorithms.",
"title": ""
},
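The record above describes a latent factor model in which both users and items are influenced by categories organised in a hierarchy. As a hedged sketch of that general idea only (the paper's exact objective, regularisation and data are not reproduced; the hierarchy and all numbers below are invented), user and item factors can be shifted by the vectors of their associated categories and ancestors:

```python
# Minimal sketch of a hierarchy-aware latent factor prediction.
import numpy as np

rng = np.random.default_rng(0)
k = 8                                   # latent dimension
P = rng.normal(scale=0.1, size=(3, k))  # user factors
Q = rng.normal(scale=0.1, size=(4, k))  # item factors
C = rng.normal(scale=0.1, size=(5, k))  # category factors (one per node)

# Hypothetical hierarchy: category 0 is the root, 1 and 2 are its children, etc.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}

def ancestors(cat):
    """Return the category and all its ancestors up to the root."""
    chain = []
    while cat is not None:
        chain.append(cat)
        cat = parent[cat]
    return chain

def predict(u, i, user_cats, item_cats):
    """Score item i for user u, folding in hierarchical category influence."""
    pu = P[u] + sum(C[c] for a in user_cats for c in ancestors(a))
    qi = Q[i] + sum(C[c] for a in item_cats for c in ancestors(a))
    return float(pu @ qi)

print(predict(u=0, i=2, user_cats=[3], item_cats=[3, 4]))
```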
{
"docid": "de9aa1b5c6e61da518e87a55d02c45e9",
"text": "A novel type of dual-mode microstrip bandpass filter using degenerate modes of a meander loop resonator has been developed for miniaturization of high selectivity narrowband microwave bandpass filters. A filter of this type having a 2.5% bandwidth at 1.58 GHz was designed and fabricated. The measured filter performance is presented.",
"title": ""
},
{
"docid": "8e18fa3850177d016a85249555621723",
"text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.",
"title": ""
},
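The record above fuses occupancy grids while discounting an imprecise sensor locally where a more accurate sensor derived free space. The sketch below is a heavily simplified, probabilistic stand-in for that idea (log-odds fusion with a per-cell weight); it is not the evidential framework of the paper, and the discount factor and grid values are illustrative only.

```python
# Simplified grid fusion: discount the imprecise sensor where the accurate
# sensor reports free space, so a blurred obstacle does not smear the result.
import numpy as np

def fuse(accurate, imprecise, discount=0.2, free_thresh=0.3):
    """accurate, imprecise: occupancy probabilities in (0, 1), same shape."""
    def logodds(p):
        return np.log(p / (1.0 - p))

    w = np.ones_like(imprecise)
    # Where the accurate sensor derived free space, trust the imprecise
    # sensor's (possibly misplaced) obstacle evidence much less.
    w[accurate < free_thresh] = discount
    fused_logodds = logodds(accurate) + w * logodds(imprecise)
    return 1.0 / (1.0 + np.exp(-fused_logodds))

acc = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.1, 0.9]])
imp = np.array([[0.7, 0.7, 0.7],      # obstacle blurred over several cells
                [0.7, 0.7, 0.7]])
print(np.round(fuse(acc, imp), 2))
```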
{
"docid": "43ac7e674624615c9906b2bd58b72b7b",
"text": "OBJECTIVE\nTo develop a method enabling human-like, flexible supervisory control via delegation to automation.\n\n\nBACKGROUND\nReal-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance.\n\n\nMETHOD\nWe review problems with static and adaptive (as opposed to \"adaptable\") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a \"level of automation\" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described.\n\n\nRESULTS\nOn the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's \"playbook.\" Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles.\n\n\nCONCLUSION\nDelegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level.\n\n\nAPPLICATION\nMost applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.",
"title": ""
},
{
"docid": "b01fbfbe98960e81359c73009a06f5bb",
"text": "Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained endto-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.",
"title": ""
},
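The record above learns segmentation from image-level labels via a multi-class MIL loss. The sketch below shows one common way such a loss can be formed (per-class spatial max plus binary cross-entropy); it illustrates the MIL principle under that assumption and is not the paper's exact loss or training setup.

```python
# Minimal multi-class MIL loss for weak image-level supervision: the most
# confident pixel per class acts as the latent instance.
import torch
import torch.nn.functional as F

def mil_loss(score_maps, image_labels):
    """score_maps: (N, C, H, W) raw logits; image_labels: (N, C) in {0, 1}."""
    image_logits = score_maps.flatten(2).max(dim=2).values   # (N, C)
    return F.binary_cross_entropy_with_logits(image_logits, image_labels)

# Toy usage: 2 images, 3 classes, 8x8 coarse score maps.
maps = torch.randn(2, 3, 8, 8, requires_grad=True)
labels = torch.tensor([[1., 0., 1.],
                       [0., 1., 0.]])
loss = mil_loss(maps, labels)
loss.backward()           # gradients flow only through the max-scoring pixels
print(float(loss))
```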
{
"docid": "0a7673d423c9134fb96bb3bb5b286433",
"text": "In this contribution the development, design, fabrication and test of a highly integrated broadband multifunctional chip is presented. The MMIC covers the C-, X-and Ku- Band and it is suitable for applications in high performance Transmit/Receive Modules. In less than 26 mm2, the MMIC embeds several T/R switches, low noise/medium power amplifiers, a stepped phase shifter and analog/digital attenuators in order to perform the RF signal routing and phase/amplitude conditioning. Besides, an embedded serial-to-parallel converter drives the phase shifter and the digital attenuator leading to a reduction in complexity of the digital control interface.",
"title": ""
},
{
"docid": "4e48a19285347f7c731fb5fd9b1bbf06",
"text": "This paper presents a new 360° analog phase shifter for Ku and Ka bands applications. These devices are reflection-type analog phase shifters implemented with a 90° branch-line coupler and a new load concept. Varactor diode is used as the tuning element to change capacitance with voltage. Test circuits have been optimized, in order to have 360° phase shift range with low insertion loss. The measured Ku band phase shifter presents more than 360° phase shift between 11.5 and 12.5 GHz. At 12 GHz the insertion loss is 3.3dB+/−0.5dB. For the Ka band example, the lowest insertion loss is obtained at 30.3 GHz (3.6dB+/−1.8dB) and the phase shift is 315°. In both case, the return loss is always lower than −10dB.",
"title": ""
},
{
"docid": "f9468884fd24ff36b81fc2016a519634",
"text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.",
"title": ""
},
{
"docid": "e567034595d9bb6a236d15b8623efce7",
"text": "In this paper, we use artificial neural networks (ANNs) for voice conversion and exploit the mapping abilities of an ANN model to perform mapping of spectral features of a source speaker to that of a target speaker. A comparative study of voice conversion using an ANN model and the state-of-the-art Gaussian mixture model (GMM) is conducted. The results of voice conversion, evaluated using subjective and objective measures, confirm that an ANN-based VC system performs as good as that of a GMM-based VC system, and the quality of the transformed speech is intelligible and possesses the characteristics of a target speaker. In this paper, we also address the issue of dependency of voice conversion techniques on parallel data between the source and the target speakers. While there have been efforts to use nonparallel data and speaker adaptation techniques, it is important to investigate techniques which capture speaker-specific characteristics of a target speaker, and avoid any need for source speaker's data either for training or for adaptation. In this paper, we propose a voice conversion approach using an ANN model to capture speaker-specific characteristics of a target speaker and demonstrate that such a voice conversion approach can perform monolingual as well as cross-lingual voice conversion of an arbitrary source speaker.",
"title": ""
},
{
"docid": "d63d849c5323cb1c97c15080247982d5",
"text": "Tampereen ammattikorkeakoulu Tampere University of Applied Sciences Degree Programme in International Business JÄRVENSIVU, VEERA: Social Media Marketing Plan for a SME Bachelor's thesis 53 pages, of which appendices 31 pages October 2017 The aim of this bachelor’s thesis was to create an efficient, low-cost social media marketing plan for a small clothing company called Nikitrade. The data gathered for establishing the marketing plan were mainly secondary data consisting of multiple books and articles related to the topic. For qualitative data gathering, interviews and discussions with the company owners were used. Because of the competitive sensitiveness of the subject, the social marketing plan itself is not published. The thesis report includes the important factors for establishing a marketing plan for a small or medium sized enterprise. The marketing plan explains the importance of a thorough analysis of the current situation, both internal and external. It also introduces strategies that should be established in order to create an efficient marketing plan. Lastly, it explains the importance of metrics and measuring the success of reaching the objectives. For discussion, the author of the thesis has gathered the key points of the social media marketing plan she has created. The most important issues of social media marketing are staying consistent in activity, quality and visuals.",
"title": ""
},
{
"docid": "354500ae7e1ad1c6fd09438b26e70cb0",
"text": "Dietary exposures can have consequences for health years or decades later and this raises questions about the mechanisms through which such exposures are 'remembered' and how they result in altered disease risk. There is growing evidence that epigenetic mechanisms may mediate the effects of nutrition and may be causal for the development of common complex (or chronic) diseases. Epigenetics encompasses changes to marks on the genome (and associated cellular machinery) that are copied from one cell generation to the next, which may alter gene expression, but which do not involve changes in the primary DNA sequence. These include three distinct, but closely inter-acting, mechanisms including DNA methylation, histone modifications and non-coding microRNAs (miRNA) which, together, are responsible for regulating gene expression not only during cellular differentiation in embryonic and foetal development but also throughout the life-course. This review summarizes the growing evidence that numerous dietary factors, including micronutrients and non-nutrient dietary components such as genistein and polyphenols, can modify epigenetic marks. In some cases, for example, effects of altered dietary supply of methyl donors on DNA methylation, there are plausible explanations for the observed epigenetic changes, but to a large extent, the mechanisms responsible for diet-epigenome-health relationships remain to be discovered. In addition, relatively little is known about which epigenomic marks are most labile in response to dietary exposures. Given the plasticity of epigenetic marks and their responsiveness to dietary factors, there is potential for the development of epigenetic marks as biomarkers of health for use in intervention studies.",
"title": ""
},
{
"docid": "3cf458392fb61a5e70647c9c951d5db8",
"text": "This paper presents an online feature selection mechanism for evaluating multiple features while tracking and adjusting the set of features used to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, we compute log likelihood ratios of class conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples are presented that demonstrate how this method adapts to changing appearances of both tracked object and scene background. We note susceptibility of the variance ratio feature selection method to distraction by spatially correlated background clutter and develop an additional approach that seeks to minimize the likelihood of distraction.",
"title": ""
},
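The record above ranks candidate features by how well their log likelihood ratio separates object from background, using a two-class variance ratio. The sketch below implements one common formulation of that score on 1-D feature values; the histogram binning and the pool of candidate features used in the paper are not reproduced, and the toy data are invented.

```python
# Score a candidate feature by the variance ratio of its log likelihood ratio.
import numpy as np

def variance_ratio(obj_values, bg_values, bins=32, rng=(0.0, 1.0), eps=1e-3):
    p, _ = np.histogram(obj_values, bins=bins, range=rng, density=False)
    q, _ = np.histogram(bg_values, bins=bins, range=rng, density=False)
    p = p / max(p.sum(), 1)
    q = q / max(q.sum(), 1)
    L = np.log(np.maximum(p, eps) / np.maximum(q, eps))   # log likelihood ratio

    def var(weights):
        return float(np.sum(weights * L**2) - np.sum(weights * L) ** 2)

    # Large spread of L over the pooled distribution, small spread within each
    # class, means the feature separates object from background well.
    return var(0.5 * (p + q)) / (var(p) + var(q) + 1e-12)

rng_ = np.random.default_rng(1)
obj = rng_.normal(0.7, 0.05, 500).clip(0, 1)                # discriminative
bg = rng_.normal(0.3, 0.05, 500).clip(0, 1)
print(variance_ratio(obj, bg))                              # high score
print(variance_ratio(rng_.random(500), rng_.random(500)))   # low score
```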
{
"docid": "d74299248da9cb4238118ad4533d5d99",
"text": "The plug-in hybrid electric vehicles (PHEVs) are specialized hybrid electric vehicles that have the potential to obtain enough energy for average daily commuting from batteries. The PHEV battery would be recharged from the power grid at home or at work and would thus allow for a reduction in the overall fuel consumption. This paper proposes an integrated power electronics interface for PHEVs, which consists of a novel Eight-Switch Inverter (ESI) and an interleaved DC/DC converter, in order to reduce the cost, the mass and the size of the power electronics unit (PEU) with high performance at any operating mode. In the proposed configuration, a novel Eight-Switch Inverter (ESI) is able to function as a bidirectional single-phase AC/DC battery charger/ vehicle to grid (V2G) and to transfer electrical energy between the DC-link (connected to the battery) and the electric traction system as DC/AC inverter. In addition, a bidirectional-interleaved DC/DC converter with dual-loop controller is proposed for interfacing the ESI to a low-voltage battery pack in order to minimize the ripple of the battery current and to improve the efficiency of the DC system with lower inductor size. To validate the performance of the proposed configuration, the indirect field-oriented control (IFOC) based on particle swarm optimization (PSO) is proposed to optimize the efficiency of the AC drive system in PHEVs. The maximum efficiency of the motor is obtained by the evaluation of optimal rotor flux at any operating point, where the PSO is applied to evaluate the optimal flux. Moreover, an improved AC/DC controller based Proportional-Resonant Control (PRC) is proposed in order to reduce the THD of the input current in charger/V2G modes. The proposed configuration is analyzed and its performance is validated using simulated results obtained in MATLAB/ SIMULINK. Furthermore, it is experimentally validated with results obtained from the prototypes that have been developed and built in the laboratory based on TMS320F2808 DSP.",
"title": ""
},
{
"docid": "556013f32d362413ca54483f75dd401c",
"text": "Existing shape-from-shading algorithms assume constant reflectance across the shaded surface. Multi-colored surfaces are excluded because both shading and reflectance affect the measured image intensity. Given a standard RGB color image, we describe a method of eliminating the reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface. Of course, shading recovery is closely tied to lightness recovery and our method follows from the work of Land [10, 9], Horn [7] and Blake [1]. In the luminance image, R+G+B, shading and reflectance are confounded. Reflectance changes are located and removed from the luminance image by thresholding the gradient of its logarithm at locations of abrupt chromaticity change. Thresholding can lead to gradient fields which are not conservative (do not have zero curl everywhere and are not integrable) and therefore do not represent realizable shading fields. By applying a new curl-correction technique at the thresholded locations, the thresholding is improved and the gradient fields are forced to be conservative. The resulting Poisson equation is solved directly by the Fourier transform method. Experiments with real images are presented.",
"title": ""
},
{
"docid": "8a695d5913c3b87fb21864c0bdd3d522",
"text": "Environmental topics have gained much consideration in corporate green operations. Globalization, stakeholder pressures, and stricter environmental regulations have made organizations develop environmental practices. Thus, green supply chain management (GSCM) is now a proactive approach for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers using the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the important and causal relationships between GSCM practices and performances. DEMATEL evaluates GSCM practices to find the main practices to improve both environmental and economic performances. This study uses intuitionistic fuzzy set theory to handle the linguistic imprecision and the ambiguity of human being’s judgment. A case study from the automotive industry is presented to evaluate the efficiency of the proposed method. The results reveal ‘‘internal management support’’, ‘‘green purchasing’’ and ‘‘ISO 14001 certification’’ are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible, while improving their economic and environmental performance goals. Further, a sensitivity analysis of results, managerial implications, conclusions, limitations and future research opportunities are provided. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
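The record above applies DEMATEL with intuitionistic fuzzy sets to GSCM practices. The sketch below shows only the classical (crisp) DEMATEL steps behind such an analysis: normalise the direct-relation matrix, compute the total-relation matrix, and derive prominence and cause/effect roles. The fuzzy extension is not shown, and the influence scores and practice names are invented for illustration.

```python
# Crisp DEMATEL sketch: T = N (I - N)^(-1), prominence D+R, relation D-R.
import numpy as np

# Hypothetical 0-4 influence scores among three GSCM practices.
A = np.array([[0, 3, 4],
              [1, 0, 2],
              [2, 1, 0]], dtype=float)

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
N = A / s                                   # normalised direct-relation matrix
T = N @ np.linalg.inv(np.eye(len(A)) - N)   # total-relation matrix

D = T.sum(axis=1)        # total influence dispatched by each factor
R = T.sum(axis=0)        # total influence received by each factor
names = ["internal support", "green purchasing", "ISO 14001"]
for name, d, r in zip(names, D, R):
    role = "cause" if d - r > 0 else "effect"
    print(f"{name}: prominence D+R={d + r:.2f}, D-R={d - r:+.2f} ({role})")
```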
{
"docid": "15c805c71f822f8e12d7f12f321f7844",
"text": "The movement pattern of mobile users plays an important role in performance analysis of wireless computer and communication networks. In this paper, we first give an overview and classification of mobility models used for simulation-based studies. Then, we present an enhanced random mobility model, which makes the movement trace of mobile stations more realistic than common approaches for random mobility. Our movement concept is based on random processes for speed and direction control in which the new values are correlated to previous ones. Upon a speed change event, a new target speed is chosen, and an acceleration is set to achieve this target speed. The principles for direction changes are similar. Finally, we discuss strategies for the stations' border behavior (i.e., what happens when nodes move out of the simulation area) and show the effects of certain border behaviors and mobility models on the spatial user distribution.",
"title": ""
},
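The record above describes a mobility model in which new speed and direction values are correlated with previous ones and nodes obey a border behaviour. The small simulation below is written in the spirit of that description under stated assumptions (event probabilities, acceleration values and the bounce rule are all illustrative choices, not the paper's parameters).

```python
# Correlated random mobility: smooth acceleration towards target speeds,
# gradual turning, and bouncing at the borders of a square area.
import math
import random

def simulate(steps=2000, dt=0.1, area=100.0, seed=0):
    rnd = random.Random(seed)
    x, y = area / 2, area / 2
    speed, target_speed, accel = 1.0, 1.0, 0.0
    heading, turn_rate = 0.0, 0.0
    trace = []
    for _ in range(steps):
        if rnd.random() < 0.01:                  # speed change event
            target_speed = rnd.uniform(0.0, 2.0)
            accel = 0.5 if target_speed > speed else -0.5
        if rnd.random() < 0.01:                  # direction change event
            turn_rate = rnd.uniform(-0.3, 0.3)   # rad/s, applied gradually
        speed += accel * dt
        if (accel > 0 and speed >= target_speed) or (accel < 0 and speed <= target_speed):
            speed, accel = target_speed, 0.0
        heading += turn_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        if not 0.0 <= x <= area:                 # border behaviour: bounce
            heading = math.pi - heading
            x = min(max(x, 0.0), area)
        if not 0.0 <= y <= area:
            heading = -heading
            y = min(max(y, 0.0), area)
        trace.append((x, y))
    return trace

print(simulate()[:3])
```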
{
"docid": "5859379f3c4c5a7186c9dc8c85e1e384",
"text": "Purpose – Investigate the use of two imaging-based methods – coded pattern projection and laser-based triangulation – to generate 3D models as input to a rapid prototyping pipeline. Design/methodology/approach – Discusses structured lighting technologies as suitable imaging-based methods. Two approaches, coded-pattern projection and laser-based triangulation, are specifically identified and discussed in detail. Two commercial systems are used to generate experimental results. These systems include the Genex Technologies 3D FaceCam and the Integrated Vision Products Ranger System. Findings – Presents 3D reconstructions of objects from each of the commercial systems. Research limitations/implications – Provides background in imaging-based methods for 3D data collection and model generation. A practical limitation is that imaging-based systems do not currently meet accuracy requirements, but continued improvements in imaging systems will minimize this limitation. Practical implications – Imaging-based approaches to 3D model generation offer potential to increase scanning time and reduce scanning complexity. Originality/value – Introduces imaging-based concepts to the rapid prototyping pipeline.",
"title": ""
},
{
"docid": "8c0cbfc060b3a6aa03fd8305baf06880",
"text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.",
"title": ""
},
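The record above relies on a bitwise traversal of tree ensembles. The single-tree sketch below illustrates only the core bitvector trick (AND the masks of all false nodes, then take the first surviving leaf); the interleaved feature-ordered traversal over the whole ensemble and the cache-aware layouts described in the abstract are omitted, and the tree itself is a made-up example.

```python
# Bitvector exit-leaf computation for one hypothetical depth-2 tree.
# Each internal node carries (feature index, threshold, mask), where bit i of
# the mask is 1 if leaf i can still be the exit leaf when the node's test
# "x[feature] <= threshold" is false. Leaves are numbered 0..3 left to right.
NODES = [
    (0, 5.0, 0b1100),   # root: if false, the left-subtree leaves 0 and 1 are ruled out
    (1, 1.0, 0b1110),   # left child: if false, leaf 0 is ruled out
    (2, 7.5, 0b1011),   # right child: if false, leaf 2 is ruled out
]
LEAF_VALUES = [0.1, 0.4, -0.2, 0.3]

def score(x):
    bitvector = 0b1111                    # all leaves initially reachable
    for feature, threshold, false_mask in NODES:
        if not (x[feature] <= threshold): # test is false -> apply the node's mask
            bitvector &= false_mask
    exit_leaf = (bitvector & -bitvector).bit_length() - 1   # lowest surviving bit
    return LEAF_VALUES[exit_leaf]

print(score({0: 3.0, 1: 0.5, 2: 9.0}))   # root true, left true  -> leaf 0
print(score({0: 9.0, 1: 0.5, 2: 9.0}))   # root false, right false -> leaf 3
```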
{
"docid": "7233197435b777dcd07a2c66be32dea9",
"text": "We present an automated assembly system that directs the actions of a team of heterogeneous robots in the completion of an assembly task. From an initial user-supplied geometric specification, the system applies reasoning about the geometry of individual parts in order to deduce how they fit together. The task is then automatically transformed to a symbolic description of the assembly-a sort of blueprint. A symbolic planner generates an assembly sequence that can be executed by a team of collaborating robots. Each robot fulfills one of two roles: parts delivery or parts assembly. The latter are equipped with specialized tools to aid in the assembly process. Additionally, the robots engage in coordinated co-manipulation of large, heavy assemblies. We provide details of an example furniture kit assembled by the system.",
"title": ""
},
{
"docid": "c6ab3d07e068637082b88160ca2f4988",
"text": "This paper focuses on the design of a real-time particle-swarm-optimization-based proportional-integral-differential (PSO-PID) control scheme for the levitated balancing and propulsive positioning of a magnetic-levitation (maglev) transportation system. The dynamic model of a maglev transportation system, including levitated electromagnets and a propulsive linear induction motor based on the concepts of mechanical geometry and motion dynamics, is first constructed. The control objective is to design a real-time PID control methodology via PSO gain selections and to directly ensure the stability of the controlled system without the requirement of strict constraints, detailed system information, and auxiliary compensated controllers despite the existence of uncertainties. The effectiveness of the proposed PSO-PID control scheme for the maglev transportation system is verified by numerical simulations and experimental results, and its superiority is indicated in comparison with PSO-PID in previous literature and conventional sliding-mode (SM) control strategies. With the proposed PSO-PID control scheme, the controlled maglev transportation system possesses the advantages of favorable control performance without chattering phenomena in SM control and robustness to uncertainties superior to fixed-gain PSO-PID control.",
"title": ""
}
] |
scidocsrr
|
ea382900de95bf3756a81eb30c3a8e1c
|
In Code We Trust? - Measuring the Control Flow Immutability of All Smart Contracts Deployed on Ethereum
|
[
{
"docid": "c428c35e7bd0a2043df26d5e2995f8eb",
"text": "Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires nontrivial computation effort, practical attacks exist which either waste miners' computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the verifier's dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier's dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.",
"title": ""
},
{
"docid": "68c1a1fdd476d04b936eafa1f0bc6d22",
"text": "Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.",
"title": ""
}
] |
[
{
"docid": "49740b1faa60a212297926fec63de0ce",
"text": "In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problemempirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children’s fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a na ı̈ve baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.",
"title": ""
},
{
"docid": "1197bc22d825a53c2b9e6ff068e10353",
"text": "CONTEXT\nPermanent evaluation of end-user satisfaction and continuance intention is a critical issue at each phase of a clinical information system (CIS) project, but most validation studies are concerned with the pre- or early post-adoption phases.\n\n\nOBJECTIVE\nThe purpose of this study was twofold: to validate at the Pompidou University Hospital (HEGP) an information technology late post-adoption model built from four validated models and to propose a unified metamodel of evaluation that could be adapted to each context or deployment phase of a CIS project.\n\n\nMETHODS\nFive dimensions, i.e., CIS quality (CISQ), perceived usefulness (PU), confirmation of expectations (CE), user satisfaction (SAT), and continuance intention (CI) were selected to constitute the CI evaluation model. The validity of the model was tested using the combined answers to four surveys performed between 2011 and 2015, i.e., more than ten years after the opening of HEGP in July 2000. Structural equation modeling was used to test the eight model-associated hypotheses.\n\n\nRESULTS\nThe multi-professional study group of 571 responders consisted of 158 doctors, 282 nurses, and 131 secretaries. The evaluation model accounted for 84% of variance of satisfaction and 53% of CI variance for the period 2011-2015 and for 92% and 69% for the period 2014-2015. In very late post adoption, CISQ appears to be the major determinant of satisfaction and CI. Combining the results obtained at various phases of CIS deployment, a Unified Model of Information System Continuance (UMISC) is proposed.\n\n\nCONCLUSION\nIn a meaningful CIS use situation at HEGP, this study confirms the importance of CISQ in explaining satisfaction and CI. The proposed UMISC model that can be adapted to each phase of CIS deployment could facilitate the necessary efforts of permanent CIS acceptance and continuance evaluation.",
"title": ""
},
{
"docid": "ea544860e3c8d8b154985af822c4a9ea",
"text": "Learning to walk over a graph towards a target node for a given input query and a source node is an important problem in applications such as knowledge base completion (KBC). It can be formulated as a reinforcement learning (RL) problem with a known state transition model. To overcome the challenge of sparse reward, we develop a graph-walking agent called M-Walk, which consists of a deep recurrent neural network (RNN) and Monte Carlo Tree Search (MCTS). The RNN encodes the state (i.e., history of the walked path) and maps it separately to a policy, a state value and state-action Q-values. In order to effectively train the agent from sparse reward, we combine MCTS with the neural policy to generate trajectories yielding more positive rewards. From these trajectories, the network is improved in an off-policy manner using Q-learning, which modifies the RNN policy via parameter sharing. Our proposed RL algorithm repeatedly applies this policy-improvement step to learn the entire model. At test time, MCTS is again combined with the neural policy to predict the target node. Experimental results on several graph-walking benchmarks show that M-Walk is able to learn better policies than other RL-based methods, which are mainly based on policy gradients. M-Walk also outperforms traditional KBC baselines.",
"title": ""
},
{
"docid": "35ecaec94dc35044fd7d406ba3f0b1db",
"text": "In earlier papers the author has presented evidence for the development during the 14th century B.C. of both the 260-day ritual almanac and the 365-day secular calendar at the Formative site of Izapa near the Pacific coast of southern Mexico. In this paper he traces the spatial and temporal diffusion of these calendrical systems throughout Mesoamerica using archaeoastronomic, architectural, and geomantic data. He identifies a center of major calendrical innovation at Edzná in the Yucatán and demonstrates the importance of the calendar in city-planning, both among the Maya and on the Mexican plateau. * * * * * * * * * * * * * * * * * * * * * As the author has demonstrated in an earlier paper, the convergence of astronomical, geographical, and historical evidence strongly suggests that the 260-day ritual almanac which was in use throughout Mesoamerica in pre-Columbian times was developed at the large Formative site of Izapa near the Pacific coast of southern Mexico. (1) In support of his contention that the ritual almanac had an astronomic origin, he has shown that only at the latitude of Izapa (14*45' N.) can a 260-day interval be measured between zenithal sun positions, and that such an interval can only be measured commencing on August 13 -a day which the Maya commemorated as 'the beginning of the world', according to the Goodman-Martinez-Thompson correlation. His geographic argument centers on the fact that Izapa is the only archaeological site located on this parallel which is likewise situated in a lowland tropical environment where animals such ås the alligator, monkey, and iguana are found -all of which were used as day-names in the ritual almanac. And finally, he has pointed out that Izapa is the only archaeological site located at this latitude which wou.1d have been in existence early enough to have served as the birthplace of the ritual almanac, that is, by about 1500 B.C. While carrying out field-work at Izapa, the author found additional evidence -admittedly circumstantial -that the 365-day secular calendar, which was also used in pre-Columbian Mesoamerica, could likewise have been devised at the same ceremonial center. At this site the true length of the solar year could have been determined simply by counting the number of days that elapse between successive sunrises over Tajumulco, the highest volcano in Central America -an event that, as seen from the main pyramid, takes place each year on the summer solstice. Subsequent investigations revealed that more than thirty of the major ceremonial centers of Mesoamerica appear to have been located according to the same principle of solsticial orientation, that is, they are situated directly in line with a sunrise or sunset position over the highest topographic feature within sight on either June 21 or December 22. (2) Although several of the oldest ceremonial centers in Mesoamerica demonstrate such an alignment, including the Olmec sites of San Lorenzo and La Venta which have been dated to 1200 and 1000 B.C. respectively, it can, of course, be argued that this geomantic principle was first developed in the Gulf coastal plain of Mexico, as the Olmec culture itself is supposed to have done. 
Although the author believes that a stronger case can be made for the disccvery of this principle at Izapa, where the topographic marker in question is scarcely 30 kilometers away than at either San Lorenzo or La Venta, where the mountains to which they are oriented lie at a distance greater than 120 kilometers, he recognized that more convincing evidence for the origins of the calendrical systems at Izapa would be required before his hypothesis would win general acceptance. That evidence now seems to have been found in the astronomic orientation of key structures in all of the major pre-Columbian ceremonial centers. From the Valley of Mexico in the north and west to the Yucatán and Petén regions of the south and east, the alignments of buildings erected by such diverse cultures as the Olmecs, Zapotecs, Mixtecs, and Mayas all demonstrate a common azimuth approximating 285o. Throughout the entire Mesoamerican realm this azimuth corresponds to the sunset position on August 13 -a date that has astronomic significance only at the latitude of Izapa. That such a date would have been commemorated anywhere else in Mesoamerica meant that its meaning would have to have been known, namely that it marked “the beginning of the world”; that it could have been commemorated meant that a 'formula' for its determination would have to have been understood, namely counting fifty-two days following the summer solstice. Thus, both the knowledge of this date and the mechanism for calculating it, appear to represent innovations whose geographic diffusion outward from Izapa was susceptible to reconstruction. It is the result of this reconstruction that the author now wishes to present. Using clues supplied both by the Maya themselves and early Spanish chroniclers such as Bishop Landa of Yucatán, the author first sought to reconstruct the chronology of calendrical innovation in Mesoamerica. Although the detailed results are published elsewhere, (3) the critical dates may be summarized as follows: (1) the creation of the ritual almanac at Izapa about 1358 B.C.; (2) the determination of the length of the solar year and the invention of the secular calendar, also at Izapa, about 1323 B.C.; (3) the development of the so-called “Long Count\" -a means of meshing the two calendars for reasons of precision, most probably at Izapa in 235 B.C.; and (4) the shift of the Maya New Year`s,Day to July 26, at a site in the Yucatán, about the year 40 A.D. Although the great antiquity of calendrical innovation in Mesoamerica may at first glance seem surprising, recent excavations near Izapa carried out by Lowe and his associates suggest that a major change in dietary patterns took place in that region shortly after 1400 B.C. -a shift from a dependence on manioc to maize. (4 ) Indeed, it may have been that the organized cultivation of maize prompted calendrical experimentation, for unlike a root crop, some knowledge of the seasonality of precipitation patterns is necessary to secure the success of the harvest. In any case, Coe also argues for the 'high antiquity' of the calendrical systems in Mesoamerica, on the basis of their consistency of symbolism and internal ordering, (5) and believes that the ritual almanac was already in use at the time of the founding of the oldest Olmec ceremonial center of San Lorenzo in 1200 B.C. (6) However, evidence that peoples elsewhere than at Izapa were celebrating August 13 seems first to appear at La Venta, a site dated to 1000 B.C. which has been termed 'the capital of the Olmecs'. 
Although the principal axis of La Venta runs 8o west of north, the so-called \"Sterling Complex\", which forms the southern nucleus of the ceremonial center, is oriented 23o away from this line, i.e., 15o east of north. Thus, the assemblage of structures in this complex, all being perpendicular to the latter axis, demonstrate an alignment squarely toward an azimuth of 285o. (7) That calendrical innovations from Izapa should have reached San Lorenzo by 1200 B.C. or La Venta by 1000 B.C. is hardly surprising. Although both of them are situated in the Gulf coastal plain, they have ready access to the Pacific through the Tehuantepec Gap, the lowest inter-ocean corridor in all of Mesoamerica. Indeed, diffusion across the isthmus would have meant simply following the 'line of least resistance' for any innovation originating in Izapa. By the sixth century B.C. the calendar seems to have penetrated into the mountains of southern Mexico, no doubt following the valley of the Tehuantepec River. In any case, Monte Albán, the large and impressive Zapotec site which overlooks the modern city of Oaxaca, demonstrates the oldest calendrical inscriptions ever found in Mesoamerica -a fact which led the Mexican archaeologist Alfonso Caso to conclude that the calendar had indeed been invented there. Despite the antiquity of its inscriptions, however, Monte Albán satisfies none of the requisite conditions for having served as the calendar's birthplace. Moreover, one of its oldest and least altered structures -a building known as Mound Y -is pointedly aligned toward the western horizon at an azimuth of 285o, in contrast to the assemblage of buildings surrounding the Great Plaza, all of which have been rebuilt one or more times and today adhere to a rigid orientation that is essentially north-south. Some 100 kilometers to the northwest of Monte Albán lies the ancient Mixtec capital of Huamelulpan. Although it is a far more modest site that the Zapotec capital, Huamelulpan provides clear evidence in its magnificent calendrical inscriptions that the ritual almanac was already in use among the Mixtec peoples by 300 B.C. Similarly, the orientation of its main pyramid to an azimuth of 285o suggests that the significance of August 13 was known and commemorated in the rugged Sierra Madre del Sur by this time. Moreover, the fact that its calendrical inscriptions -like those of Monte Albán -employ only the ritual almanac and not the Long Count lends further support to the author’s contention that the latter was first developed in 235 B.C., and then most probably at Izapa. By about the time of the birth of Christ, the calendars appear to have reached both the Mexican plateau in the north and the base of the Yucatán peninsula in the east. In both of these areas there is dramatic evidence of the interplay of astronomic, calendric, and religious factors in the layout and design not only of major architectural structures but also of entire cities or ceremonial centers. Let us look first at the Valley of Mexico where the greatest metropolis in pre-Columbian America arose. Located some 50 kilometers to the northeast of Mexico City, Teotihuacán is estimated to have been one of the three most populous cities in the world in its day (100 B.C",
"title": ""
},
{
"docid": "dd2aa47708d170d2126116e560e9f520",
"text": "The comorbidity of current and lifetime DSM-IV anxiety and mood disorders was examined in 1,127 outpatients who were assessed with the Anxiety Disorders Interview Schedule for DSM-IV: Lifetime version (ADIS-IV-L). The current and lifetime prevalence of additional Axis I disorders in principal anxiety and mood disorders was found to be 57% and 81%, respectively. The principal diagnostic categories associated with the highest comorbidity rates were mood disorders, posttraumatic stress disorder (PTSD), and generalized anxiety disorder (GAD). A high rate of lifetime comorbidity was found between the anxiety and mood disorders; the lifetime association with mood disorders was particularly strong for PTSD, GAD, obsessive-compulsive disorder, and social phobia. The findings are discussed in regard to their implications for the classification of emotional disorders.",
"title": ""
},
{
"docid": "74381f9602374af5ad0775a69163d1b9",
"text": "This paper discusses some of the basic formulation issues and solution procedures for solving oneand twodimensional cutting stock problems. Linear programming, sequential heuristic and hybrid solution procedures are described. For two-dimensional cutting stock problems with rectangular shapes, we also propose an approach for solving large problems with limits on the number of times an ordered size may appear in a pattern.",
"title": ""
},
{
"docid": "f9afdab6f3cac70d6680b02b32f37b49",
"text": "Marx generators can produce high voltage pulses using multiple identical stages that operate at a fraction of the total output voltage, without the need for a step-up transformer that limits the pulse risetimes and lowers the efficiency of the system. Each Marx stage includes a capacitor or pulse forming network, and a high voltage switch. Typically, these switches are spark gaps resulting in Marx generators with low repetition rates and limited lifetimes. The development of economical, compact, high voltage, high di/dt, and fast turn-on solid-state switches make it easy to build economical, long lifetime, high voltage Marx generators capable of high pulse repetition rates. We have constructed a Marx generator using our 24 kV thyristor based switches, which are capable of conducting 14 kA peak currents with ringing discharges at >25 kA/mus rate of current risetimes. The switches have short turn-on delays, less than 200 ns, low timing jitters, and are triggered by a single 10 V isolated trigger pulse. This paper will include a description of a 4-stage solid-state Marx and triggering system, as well as show data from operation at 15 kV charging voltage. The Marx was used to drive a one-stage argon ion accelerator",
"title": ""
},
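The stage arithmetic behind the Marx passage above is simple: n stages charged in parallel to a voltage V_charge and then erected in series give an ideal open-circuit output approaching n·V_charge. A minimal sketch follows; the per-stage capacitance is an assumed value used only for illustration, not a figure from the paper, and switch drops and losses are ignored.

```python
# Idealized Marx-generator arithmetic: n stages charged in parallel to v_charge
# erect in series, so the open-circuit output approaches n * v_charge.
n_stages = 4
v_charge = 15e3        # volts per stage, matching the reported charging voltage
c_stage = 1e-6         # farads per stage -- assumed value, for illustration only

v_out_ideal = n_stages * v_charge                        # ignores losses and switch drops
stored_energy = n_stages * 0.5 * c_stage * v_charge**2   # energy available per shot

print(f"ideal erected voltage: {v_out_ideal / 1e3:.0f} kV")   # 60 kV
print(f"stored energy per shot: {stored_energy:.0f} J")       # 450 J with the assumed C
```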
{
"docid": "f3a8fa7b4c6ac7a6218a0b8aa5a8f4b2",
"text": "Give us 5 minutes and we will show you the best book to read today. This is it, the uncertainty quantification theory implementation and applications that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read.",
"title": ""
},
{
"docid": "6b52cc8055bd565e1f04095da8a7a5e9",
"text": "This study examined the effect of lifelong bilingualism on maintaining cognitive functioning and delaying the onset of symptoms of dementia in old age. The sample was selected from the records of 228 patients referred to a Memory Clinic with cognitive complaints. The final sample consisted of 184 patients diagnosed with dementia, 51% of whom were bilingual. The bilinguals showed symptoms of dementia 4 years later than monolinguals, all other measures being equivalent. Additionally, the rate of decline in Mini-Mental State Examination (MMSE) scores over the 4 years subsequent to the diagnosis was the same for a subset of patients in the two groups, suggesting a shift in onset age with no change in rate of progression.",
"title": ""
},
{
"docid": "0ff8c4799b62c70ef6b7d70640f1a931",
"text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "9379523ea300bd07d0e26242f692948a",
"text": "There has been a growing interest in recent years in the poten tial use of product differentiation (through eco-type labelling) as a means of promoting and rewarding the sustainable management and exploitation of fish stocks. This interest is marked by the growing literature on the topic, exploring both the concept and the key issues associated with it. It reflects a frustration among certain groups with the supply-side measures currently employed in fisheries management, which on their own have proven insufficient to counter the negative incentive structures characterising open-a ccess fisheries. The potential encapsulated by product differentiation has, however, yet to be tested in the market place. One of the debates that continues to accompany the concept is the nature and extent of the response of consumers to the introduction of labelled seafood products. Though differentiated seafood products are starting to come onto the market, we are still essentially dealing with a hypothetical market situation in terms of analysing consumer behaviour. Moving the debate from theoretical extrapolation to one of empirical evidence, this paper presents the preliminary empirical results of a study undertaken in the UK. The study aimed, amongst other things, to evaluate whether UK consumers are prepared to pay a premium for seafood products that are differentiated on the grounds that the fish is either of (a) high quality or (b) comes from a sustainably managed fishery. It also aimed to establish whether the quantity of fish products purchased would change. The results are presented in this paper.",
"title": ""
},
{
"docid": "1b4eb25d20cd2ca431c2b73588021086",
"text": "Machine rule induction was examined on a difficult categorization problem by applying a Holland-style classifier system to a complex letter recognition task. A set of 20,000 unique letter images was generated by randomly distorting pixel images of the 26 uppercase letters from 20 different commercial fonts. The parent fonts represented a full range of character types including script, italic, serif, and Gothic. The features of each of the 20,000 characters were summarized in terms of 16 primitive numerical attributes. Our research focused on machine induction techniques for generating IF-THEN classifiers in which the IF part was a list of values for each of the 16 attributes and the THEN part was the correct category, i.e., one of the 26 letters of the alphabet. We examined the effects of different procedures for encoding attributes, deriving new rules, and apportioning credit among the rules. Binary and Gray-code attribute encodings that required exact matches for rule activation were compared with integer representations that employed fuzzy matching for rule activation. Random and genetic methods for rule creation were compared with instance-based generalization. The strength/specificity method for credit apportionment was compared with a procedure we call “accuracy/utility.”",
"title": ""
},
{
"docid": "f226373b0ef1cbdfcdddbb978984eb75",
"text": "The series-parallel resonant converter and its magnetic components are analysed by fundamental frequency techniques. The proposed method allows an easy design of the integrated transformer and the calculation of the losses of all components of the converter. Thus, an optimisation of the whole converter system – minimising either the volume and/or the losses of the converter (upper temperature is restricted) – could be done based on the derived set of equations. Moreover, the equations also include a ZCS condition for one leg of the converter, the control of the converter by the frequency and duty cycle and a load with offset voltage. The proposed analytic model is validated by simulation as well as by experimental results.",
"title": ""
},
{
"docid": "5bf172cfc7d7de0c82707889cf722ab2",
"text": "The concept of a decentralized ledger usually implies that each node of a blockchain network stores the entire blockchain. However, in the case of popular blockchains, which each weigh several hundreds of GB, the large amount of data to be stored can incite new or low-capacity nodes to run lightweight clients. Such nodes do not participate to the global storage effort and can result in a centralization of the blockchain by very few nodes, which is contrary to the basic concepts of a blockchain. To avoid this problem, we propose new low storage nodes that store a reduced amount of data generated from the blockchain by using erasure codes. The properties of this technique ensure that any block of the chain can be easily rebuilt from a small number of such nodes. This system should encourage low storage nodes to contribute to the storage of the blockchain and to maintain decentralization despite of a globally increasing size of the blockchain. This system paves the way to new types of blockchains which would only be managed by low capacity nodes.",
"title": ""
},
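To make the low-storage idea in the passage above concrete, here is a toy single-parity sketch: a block is split into k chunks plus one XOR parity chunk, so any one missing chunk can be rebuilt from the others. This is only a stand-in for the proper erasure codes such a scheme would actually use (e.g. Reed-Solomon-style codes tolerating multiple losses); the block payload and chunk count are invented.

```python
# Toy (k+1, k) single-parity scheme: any one lost chunk is rebuilt by XOR.
# A real deployment would use stronger erasure codes (e.g. Reed-Solomon-style).

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

block = b"block #1024: serialized transactions..."   # invented block payload
k = 4
chunk_len = -(-len(block) // k)                       # ceil division
block = block.ljust(k * chunk_len, b"\0")             # pad to a whole number of chunks
chunks = [block[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]

parity = chunks[0]
for ch in chunks[1:]:
    parity = xor_bytes(parity, ch)

# Simulate losing chunk 2 and rebuilding it from the survivors plus parity.
lost = 2
survivors = [c for i, c in enumerate(chunks) if i != lost]
rebuilt = parity
for ch in survivors:
    rebuilt = xor_bytes(rebuilt, ch)
assert rebuilt == chunks[lost]
print("chunk", lost, "recovered from", len(survivors), "chunks + parity")
```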
{
"docid": "b773df87bf97191a8dd33bd81a7ee2e5",
"text": "We consider the problem of recommending comment-worthy articles such as news and blog-posts. An article is defined to be comment-worthy for a particular user if that user is interested to leave a comment on it. We note that recommending comment-worthy articles calls for elicitation of commenting-interests of the user from the content of both the articles and the past comments made by users. We thus propose to develop content-driven user profiles to elicit these latent interests of users in commenting and use them to recommend articles for future commenting. The difficulty of modeling comment content and the varied nature of users' commenting interests make the problem technically challenging. The problem of recommending comment-worthy articles is resolved by leveraging article and comment content through topic modeling and the co-commenting pattern of users through collaborative filtering, combined within a novel hierarchical Bayesian modeling approach. Our solution, Collaborative Correspondence Topic Models (CCTM), generates user profiles which are leveraged to provide a personalized ranking of comment-worthy articles for each user. Through these content-driven user profiles, CCTM effectively handle the ubiquitous problem of cold-start without relying on additional meta-data. The inference problem for the model is intractable with no off-the-shelf solution and we develop an efficient Monte Carlo EM algorithm. CCTM is evaluated on three real world data-sets, crawled from two blogs, ArsTechnica (AT) Gadgets (102,087 comments) and AT-Science (71,640 comments), and a news site, DailyMail (33,500 comments). We show average improvement of 14% (warm-start) and 18% (cold-start) in AUC, and 80% (warm-start) and 250% (cold-start) in Hit-Rank@5, over state of the art.",
"title": ""
},
{
"docid": "865fb9e209de0b277c8f1b007b0f5cbf",
"text": "Sentiment classification on Twitter has attracted increasing research in recent years. Most existing work focuses on feature engineering according to the tweet content itself. In this paper, we propose a contextbased neural network model for Twitter sentiment analysis, incorporating contextualized features from relevant Tweets into the model in the form of word embedding vectors. Experiments on both balanced and unbalanced datasets show that our proposed models outperform the current state-of-the-art.",
"title": ""
},
{
"docid": "5a601e08824185bafeb94ac432b6e92e",
"text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",
"title": ""
},
{
"docid": "b5df9d2964655e89a70adbc5d5214cd7",
"text": "This paper focuses on a thorough comparison of the two main hardware targets for real-time optimization of a computer vision algorithm: GPU and FPGA. Based on a complex case study algorithm for threaded isle detection, implementation on both hardware targets is compared in terms of resulting time performance, code translation effort, hardware cost, power efficiency and integrateability. A real-life case study as described in this paper is a very useful addition to discussions on a more theoretical level, going beyond artificial experiments. In our experiments, we show the speed-up gained by porting our algorithm to FPGA using manually written VHDL and to a heterogeneous GPU/CPU architecture with the OpenCL language. Also, issues and problems occurring during the code porting are detailed.",
"title": ""
},
{
"docid": "d0565bcb93ab719ac1f36e2d8c9dd919",
"text": "Heterogeneity among rivals implies that each firm faces a unique competitive set, despite overlapping market domains. This suggests the utility of a firm-level approach to competitor identification and analysis, particularly under dynamic environmental conditions. We take such an approach in developing a market-based and resource-based framework for scanning complex competitive fields. By facilitating a search for functional similarities among products and resources, the framework reveals relevant commonalities in an otherwise heterogeneous competitive set. Beyond its practical contribution, the paper also advances resource-based theory as a theory of competitive advantage. Most notably, we show that resource substitution conditions not only the sustainability of a competitive advantage, but the attainment of competitive advantage as well. With equifinality among resources of different types, the rareness condition for even temporary competitive advantage must include resource substitutes. It is not rareness in terms of resource type that matters, but rareness in terms of resource functionality. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
}
] |
scidocsrr
|
c17846ea6c9c2f0ac8c1637b7c103d60
|
Haptic feedback in mixed-reality environment
|
[
{
"docid": "d2f36cc750703f5bbec2ea3ef4542902",
"text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …",
"title": ""
}
] |
[
{
"docid": "b5e0faba5be394523d10a130289514c2",
"text": "Child neglect results from either acts of omission or of commission. Fatalities from neglect account for 30% to 40% of deaths caused by child maltreatment. Deaths may occur from failure to provide the basic needs of infancy such as food or medical care. Medical care may also be withheld because of parental religious beliefs. Inadequate supervision may contribute to a child's injury or death through adverse events involving drowning, fires, and firearms. Recognizing the factors contributing to a child's death is facilitated by the action of multidisciplinary child death review teams. As with other forms of child maltreatment, prevention and early intervention strategies are needed to minimize the risk of injury and death to children.",
"title": ""
},
{
"docid": "36f960b37e7478d8ce9d41d61195f83a",
"text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.",
"title": ""
},
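The closed-form estimator summarized in the passage above is not reproduced here, but the underlying geometry is easy to illustrate: each time difference of arrival constrains the source to a hyperbola, and the source lies where those curves intersect. The sketch below recovers a source by brute-force search over a grid, assuming a known propagation speed and noise-free measurements; the sensor and source coordinates are invented for the example.

```python
import numpy as np

c = 343.0                                             # propagation speed (m/s)
sensors = np.array([[0.0, 0.0], [10.0, 0.0],
                    [0.0, 10.0], [10.0, 10.0]])       # known sensor positions
source = np.array([3.0, 7.0])                         # unknown position to recover

# Noise-free TDOAs measured relative to sensor 0.
ranges = np.linalg.norm(sensors - source, axis=1)
tdoa = (ranges[1:] - ranges[0]) / c

# Brute-force search: the grid point whose predicted TDOAs best match wins.
best, best_err = None, np.inf
for x in np.linspace(0.0, 10.0, 201):
    for y in np.linspace(0.0, 10.0, 201):
        r = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        err = np.sum(((r[1:] - r[0]) / c - tdoa) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
print(best)   # close to (3.0, 7.0), limited only by the grid resolution
```

The appeal of the closed-form method described in the passage is precisely that it avoids this kind of exhaustive search (and the iteration of Taylor-series methods) while staying near the Cramer-Rao bound.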
{
"docid": "01b73e9e8dbaf360baad38b63e5eae82",
"text": "Received: 29 September 2009 Revised: 19 April 2010 2nd Revision: 5 July 2010 3rd Revision: 30 November 2010 Accepted: 8 December 2010 Abstract Throughout the world, sensitive personal information is now protected by regulatory requirements that have translated into significant new compliance oversight responsibilities for IT managers who have a legal mandate to ensure that individual employees are adequately prepared and motivated to observe policies and procedures designed to ensure compliance. This research project investigates the antecedents of information privacy policy compliance efficacy by individuals. Using Health Insurance Portability and Accountability Act compliance within the healthcare industry as a practical proxy for general organizational privacy policy compliance, the results of this survey of 234 healthcare professionals indicate that certain social conditions within the organizational setting (referred to as external cues and comprising situational support, verbal persuasion, and vicarious experience) contribute to an informal learning process. This process is distinct from the formal compliance training procedures and is shown to influence employee perceptions of efficacy to engage in compliance activities, which contributes to behavioural intention to comply with information privacy policies. Implications for managers and researchers are discussed. European Journal of Information Systems (2011) 20, 267–284. doi:10.1057/ejis.2010.72; published online 25 January 2011",
"title": ""
},
{
"docid": "42bf428e3c6a4b3c4cb46a2735de872d",
"text": "We have developed a low cost software radio based platform for monitoring EPC Gen 2 RFID traffic. The Gen 2 standard allows for a range of PHY layer configurations and does not specify exactly how to compose protocol messages to inventory tags. This has made it difficult to know how well the standard works, and how it is implemented in practice. Our platform provides much needed visibility into Gen 2 systems by capturing reader transmissions using the USRP2 and decoding them in real-time using software we have developed and released to the public. In essence, our platform delivers much of the functionality of expensive (< $50,000) conformance testing products, with greater extensibility at a small fraction of the cost. In this paper, we present the design and implementation of the platform and evaluate its effectiveness, showing that it has better than 99% accuracy up to 3 meters. We then use the platform to study a commercial RFID reader, showing how the Gen 2 standard is realized, and indicate avenues for research at both the PHY and MAC layers.",
"title": ""
},
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "d3dde75d07ad4ed79ff1da2c3a601e1d",
"text": "In open trials, 1-Hz repetitive transcranial magnetic stimulation (rTMS) to the supplementary motor area (SMA) improved symptoms and normalized cortical hyper-excitability of patients with obsessive-compulsive disorder (OCD). Here we present the results of a randomized sham-controlled double-blind study. Medication-resistant OCD patients (n=21) were assigned 4 wk either active or sham rTMS to the SMA bilaterally. rTMS parameters consisted of 1200 pulses/d, at 1 Hz and 100% of motor threshold (MT). Eighteen patients completed the study. Response to treatment was defined as a > or = 25% decrease on the Yale-Brown Obsessive Compulsive Scale (YBOCS). Non-responders to sham and responders to active or sham rTMS were offered four additional weeks of open active rTMS. After 4 wk, the response rate in the completer sample was 67% (6/9) with active and 22% (2/9) with sham rTMS. At 4 wk, patients receiving active rTMS showed on average a 25% reduction in the YBOCS compared to a 12% reduction in those receiving sham. In those who received 8-wk active rTMS, OCD symptoms improved from 28.2+/-5.8 to 14.5+/-3.6. In patients randomized to active rTMS, MT measures on the right hemisphere increased significantly over time. At the end of 4-wk rTMS the abnormal hemispheric laterality found in the group randomized to active rTMS normalized. The results of the first randomized sham-controlled trial of SMA stimulation in the treatment of resistant OCD support further investigation into the potential therapeutic applications of rTMS in this disabling condition.",
"title": ""
},
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
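As a minimal sketch of the imperative, define-by-run differentiation style the passage above describes, the snippet below uses the public PyTorch API; the specific tensors and loss are arbitrary and chosen only to show how gradients flow back to the leaves.

```python
import torch

# Two leaf tensors; requires_grad marks them for gradient tracking.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

# The graph is recorded as ordinary imperative code runs (define-by-run).
y = (w * x).sum()
loss = (y - 4.0) ** 2

loss.backward()   # reverse-mode automatic differentiation
print(x.grad)     # d(loss)/dx = 2*(y - 4)*w
print(w.grad)     # d(loss)/dw = 2*(y - 4)*x
```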
{
"docid": "df1c6a5325dae7159b5bdf5dae65046d",
"text": "Researchers from a wide range of management areas agree that conflicts are an important part of organizational life and that their study is important. Yet, interpersonal conflict is a neglected topic in information system development (ISD). Based on definitional properties of interpersonal conflict identified in the management and organizational behavior literatures, this paper presents a model of how individuals participating in ISD projects perceive conflict and its influence on ISD outcomes. Questionnaire data was obtained from 265 IS staff (main sample) and 272 users (confirmatory sample) working on 162 ISD projects. Results indicated that the construct of interpersonal conflict was reflected by three key dimensions: disagreement, interference, and negative emotion. While conflict management was found to have positive effects on ISD outcomes, it did not substantially mitigate the negative effects of interpersonal conflict on these outcomes. In other words, the impact of interpersonal conflict was perceived to be negative, regardless of how it was managed or resolved.",
"title": ""
},
{
"docid": "5ef49933bc344b76907c271bd832cff0",
"text": "Because music conveys and evokes feelings, a wealth of research has been performed on music emotion recognition. Previous research has shown that musical mood is linked to features based on rhythm, timbre, spectrum and lyrics. For example, sad music correlates with slow tempo, while happy music is generally faster. However, only limited success has been obtained in learning automatic classifiers of emotion in music. In this paper, we collect a ground truth data set of 2904 songs that have been tagged with one of the four words “happy”, “sad”, “angry” and “relaxed”, on the Last.FM web site. An excerpt of the audio is then retrieved from 7Digital.com, and various sets of audio features are extracted using standard algorithms. Two classifiers are trained using support vector machines with the polynomial and radial basis function kernels, and these are tested with 10-fold cross validation. Our results show that spectral features outperform those based on rhythm, dynamics, and, to a lesser extent, harmony. We also find that the polynomial kernel gives better results than the radial basis function, and that the fusion of different feature sets does not always lead to improved classification.",
"title": ""
},
{
"docid": "289b94393191d793e3f4b79787f61d7d",
"text": "Plug-in hybrid electric vehicles (PHEVs) will play a vital role in future sustainable transportation systems due to their potential in terms of energy security, decreased environmental impact, improved fuel economy, and better performance. Moreover, new regulations have been established to improve the collective gas mileage, cut greenhouse gas emissions, and reduce dependence on foreign oil. This paper primarily focuses on two major thrust areas of PHEVs. First, it introduces a grid-friendly bidirectional alternating current/direct current ac/dc-dc/ac rectifier/inverter for facilitating vehicle-to-grid (V2G) integration of PHEVs. Second, it presents an integrated bidirectional noninverted buck-boost converter that interfaces the energy storage device of the PHEV to the dc link in both grid-connected and driving modes. The proposed bidirectional converter has minimal grid-level disruptions in terms of power factor and total harmonic distortion, with less switching noise. The integrated bidirectional dc/dc converter assists the grid interface converter to track the charge/discharge power of the PHEV battery. In addition, while driving, the dc/dc converter provides a regulated dc link voltage to the motor drive and captures the braking energy during regenerative braking.",
"title": ""
},
{
"docid": "cc2e24cd04212647f1c29482aa12910d",
"text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.",
"title": ""
},
{
"docid": "2c9e59fbd7d6dd7254c9a055e6e789ca",
"text": "In this thesis we address the problem of perception, modeling, and use of context information from body-worn sensors for wearable computers. A context model is an abstraction of the user’s situation that is intelligible to the user, perceivable by sensors, and on the right level of abstraction, so that applications can use it to adapt their behavior. The issues of perception, modeling, and use of context information are thus strongly interrelated. Embedded in two application scenarios, we make contributions to the extraction of context from acceleration and audio sensors, the modeling of human interruptibility, and it’s estimation from acceleration, audio, and location sensors. We investigate the extraction of context from acceleration and audio data. We use body-worn acceleration sensors to classify the user’s physical activity. We developed a sensing platform which allows to record data from 12 three-dimensional acceleration sensors distributed over the body of the user. We classify activity of different complexity, such as sitting, walking, and writing on a white board, using a naïve Bayes’ classifier. We investigate which sensor placement on the body is best for recognizing such activities. We use auditory scene classification to extract context information about the social situation of the user. We classify the auditory scene of the user in street, restaurant, lecture, and conversation (plus a garbage class). We investigate which features are best suited for such a classification, and which feature selection mechanisms, sampling rates, and recognition windows are appropriate. The first application scenario is a meeting recorder that records not only audio and video of a meeting, but also additional, personal annotations from the user’s context. In this setting we make first contributions to the extraction of context information. We use acceleration sensors for simple activity recognition, and audio to identify different speakers in the recording and thus infer the flow of discussion and presentation. For the second application scenario, the estimation of the user’s interruptibility for automatic mediation of notifications, we developed a (context) model of human interruptibility. It distinguishes between the interruptibility of the user and that of the environment. We evaluate the model in a user study. We propose an algorithm to estimate the interruptibility within this model from sensor data. It combines low-level context information using so-called tendencies. A first version works on context from classifers trained in a supervised manner and uses hand-crafted tendencies. Although the algorithm produces good results with some 88-92% recognition score, it does not scale to large numbers of low-level contexts limiting the extensibility of the system. An improved version uses automatically found low-level contexts and learns the tendencies automatically. It thus allows to easily add new sensors and to adapt the system during run-time. We evaluated the algorithm on a data set of up to two days and obtained recognition scores of 90-97%.",
"title": ""
},
{
"docid": "483b57bef1158ae37c43ca9a92c1cda3",
"text": "Recently, advanced driver assistance system (ADAS) has attracted a lot of attention due to the fast growing industry of smart cars, which is believed to be the next human-computer interaction after smart phones. As ADAS is a critical enabling component in a human-in-the-loop cyber-physical system (CPS) involving complicated physical environment, it has stringent requirements on reliability, accuracy as well as latency. Lane and vehicle detections are the basic functions in ADAS, which provide lane departure warning (LDW) and forward collision warning (FCW) to predict the dangers and warn the drivers. While extensive literature exists on this topic, none of them considers the important fact that many vehicles today do not have powerful embedded electronics or cameras. It will be costly to upgrade the vehicle just for ADAS enhancement. To address this issue, we demonstrate a new framework that utilizes microprocessors in mobile devices with embedded cameras for advanced driver assistance. The main challenge that comes with this low cost solution is the dilemma between limited computing power and tight latency requirement, and uncalibrated camera and high accuracy requirement. Accordingly, we propose an efficient, accurate, flexible yet light-weight real-time lane and vehicle detection method and implement it on Android devices. Real road test results suggest that an average latency of 15 fps can be achieved with a high accuracy of 12.58 average pixel offset for each lane in all scenarios and 97+ precision for vehicle detection. To the best of the authors' knowledge, this is the very first implementation of both lane and vehicle detections on mobile devices with un-calibrated embedded camera.",
"title": ""
},
{
"docid": "39492127ee68a86b33a8a120c8c79f5d",
"text": "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/ √ t) for convex functions and O(log t/t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named GraphGuided SVM is proposed to demonstrate the usefulness of our algorithm.",
"title": ""
},
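For reference, the standard batch scaled-form ADMM iteration for minimizing f(x)+g(z) subject to Ax+Bz=c is shown below. The stochastic variant discussed in the passage modifies the x-update (replacing f with a stochastic approximation drawn at each iteration), so this should be read as background rather than the paper's exact update rule.

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\bigl\lVert Ax + Bz^{k} - c + u^{k}\bigr\rVert_2^2,\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\bigl\lVert Ax^{k+1} + Bz - c + u^{k}\bigr\rVert_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
```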
{
"docid": "574aca6aa63dd17949fcce6a231cf2d3",
"text": "This paper presents an algorithm for segmenting the hair region in uncontrolled, real life conditions images. Our method is based on a simple statistical hair shape model representing the upper hair part. We detect this region by minimizing an energy which uses active shape and active contour. The upper hair region then allows us to learn the hair appearance parameters (color and texture) for the image considered. Finally, those parameters drive a pixel-wise segmentation technique that yields the desired (complete) hair region. We demonstrate the applicability of our method on several real images.",
"title": ""
},
{
"docid": "05da057559ac24f6780801aebd49cd48",
"text": "The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection.",
"title": ""
},
{
"docid": "a5090b67307b2efa1f8ae7d6a212a6ff",
"text": "Providing highly flexible connectivity is a major architectural challenge for hardware implementation of reconfigurable neural networks. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random configurable connectivity). We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that multicast mesh NoC provides the highest performance/cost ratio and consequently it is the most suitable interconnect architecture for configurable neural network implementation. Routing table size requirements and their impact on scalability were analyzed. Modular hierarchical architecture based on multicast mesh NoC is proposed to allow large scale neural networks emulation. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "14ca9dfee206612e36cd6c3b3e0ca61e",
"text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.",
"title": ""
},
{
"docid": "09f3bb814e259c74f1c42981758d5639",
"text": "PURPOSE OF REVIEW\nThe application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases.\n\n\nRECENT FINDINGS\nMachine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural network are state-of-the art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation test, breath analysis, lung sound analysis and telemedicine with promising results in small-scale studies.\n\n\nSUMMARY\nOverall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.",
"title": ""
}
] |
scidocsrr
|
acc2b8c3176e647f9c8868f0522e841e
|
A Glucose Fuel Cell for Implantable Brain–Machine Interfaces
|
[
{
"docid": "21511302800cd18d21dbc410bec3cbb2",
"text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.",
"title": ""
}
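A back-of-the-envelope free-space link budget helps put the reported microwatt power-up threshold in context. The sketch below applies the Friis equation at 950 MHz with an assumed transmit power and unity antenna gains; it is not the paper's circuit model, just an order-of-magnitude check of how available RF power falls off with range.

```python
import math

f = 950e6                  # carrier frequency (Hz), as in the prototype
lam = 3e8 / f              # wavelength (m)

p_tx = 1.0                 # assumed transmit power (W) -- illustration only
g_tx = g_rx = 1.0          # assumed unity antenna gains

def received_power(d):
    """Friis free-space received power (W) at distance d metres."""
    return p_tx * g_tx * g_rx * (lam / (4 * math.pi * d)) ** 2

for d in (1.0, 3.0, 10.0):
    print(f"{d:4.0f} m -> {received_power(d) * 1e6:8.1f} uW")
```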
] |
[
{
"docid": "e56e5ed8e1122efcfb30e1c0e24cac9f",
"text": "In this paper, a simple lamination stacking method for the teeth of an axial flux permanent-magnet synchronous machine with concentrated stator windings is proposed. In this simple lamination stacking method, only two lamination profiles are used and are stacked alternately. To evaluate the performance of this stacking method, a comparison is made between the proposed method with two profiles and a conventional stacking method that uses different profiles for each lamination layer, using a multilayer 2-D finite element model.",
"title": ""
},
{
"docid": "08af1b80f0e58fbaa75a5a61b9a716e3",
"text": "Case Based Reasoning (CBR) is an important technique in artificial intelligence, which has been applied to various kinds of problems in a wide range of domains. Selecting case representation formalism is critical for the proper operation of the overall CBR system. In this paper, we survey and evaluate all of the existing case representation methodologies. Moreover, the case retrieval and future challenges for effective CBR are explained. Case representation methods are grouped in to knowledge-intensive approaches and traditional approaches. The first group overweight the second one. The first methods depend on ontology and enhance all CBR processes including case representation, retrieval, storage, and adaptation. By using a proposed set of qualitative metrics, the existing methods based on ontology for case representation are studied and evaluated in details. All these systems have limitations. No approach exceeds 53% of the specified metrics. The results of the survey explain the current limitations of CBR systems. It shows that ontology usage in case representation needs improvements to achieve semantic representation and semantic retrieval in CBR system. Keywords—Case based reasoning; Ontological case representation; Case retrieval; Clinical decision support system; Knowledge management",
"title": ""
},
{
"docid": "835dbc5d1c45d991fece5bb29f961bec",
"text": "Use of PET/MR in children has not previously been reported, to the best of our knowledge. Children with systemic malignancies may benefit from the reduced radiation exposure offered by PET/MR. We report our initial experience with PET/MR hybrid imaging and our current established sequence protocol after 21 PET/MR studies in 15 children with multifocal malignant diseases. The effective dose of a PET/MR scan was only about 20% that of the equivalent PET/CT examination. Simultaneous acquisition of PET and MR data combines the advantages of the two previously separate modalities. Furthermore, the technique also enables whole-body diffusion-weighted imaging (DWI) and statements to be made about the biological cellularity and nuclear/cytoplasmic ratio of tumours. Combined PET/MR saves time and resources. One disadvantage of PET/MR is that in order to have an effect, a significantly longer examination time is needed than with PET/CT. In our initial experience, PET/MR has turned out to be an unexpectedly stable and reliable hybrid imaging modality, which generates a complementary diagnostic study of great additional value.",
"title": ""
},
{
"docid": "28cfe864acc8c40eb8759261273cf3bb",
"text": "Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed forms, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obeys an $\\left[O\\left(1\\slash V\\right),O\\left(V\\right)\\right]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters to the system performance.",
"title": ""
},
{
"docid": "3d0ecd7acf079c2eed54b73e82ea4be5",
"text": "Nowadays, the increasing quantity of municipal solid waste has causes serious environmental problem which requires a better solution in handling the wastes that generate. Construction waste is considered as part of the municipal solid waste. Construction wastes that produce in the construction process contributes a large amount to municipal solid waste. For that reason, a proper way of handling construction wastes is significance in reducing the negative impacts towards the environment, social, and economy. Sustainable waste management is introduced to maintain the balance between the environment, social and economic aspects through several ways such as acts implementation, and techniques in managing waste. Therefore, it is essential to identify current waste management system adopted by industry in order to make adjustment and improvement in moving towards sustainable waste management. This paper highlights the current waste management system implemented in Malaysia and the challenges in applying the concept of sustainability into waste management through reviewing past similar researches. This research has conducted an exploratory interview to six industry practitioners in both private and government sector in Malaysia. The results obtained show the current waste management systems applied in Malaysia and factors that hinder the concept of sustainability into waste management. It allows a major shift in Malaysia waste management by improvise current waste management technology into more sustainable way. Keywords—waste management system; construction wastes sustainability; technology",
"title": ""
},
{
"docid": "c69e805751421b516e084498e7fc6f44",
"text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.",
"title": ""
},
{
"docid": "ec06587bff3d5c768ab9083bd480a875",
"text": "Wireless sensor networks are an emerging technology for low-cost, unattended monitoring of a wide range of environments, and their importance has been enforced by the recent delivery of the IEEE 802.15.4 standard for the physical and MAC layers and the forthcoming Zigbee standard for the network and application layers. The fast progress of research on energy efficiency, networking, data management and security in wireless sensor networks, and the need to compare with the solutions adopted in the standards motivates the need for a survey on this field.",
"title": ""
},
{
"docid": "b19f473f77b20dcb566fded46100a71b",
"text": "Large amount of information are available online on web.The discussion forum, review sites, blogs are some of the opinion rich resources where review or posted articles is their sentiment, or overall opinion towards the subject matter. The opinions obtained from those can be classified in to positive or negative which can be used by customer to make product choice and by businessmen for finding customer satisfaction .This paper studies online movie reviews using sentiment analysis approaches. In this study, sentiment classification techniques were applied to movie reviews. Specifically, we compared two supervised machine learning approaches SVM, Navie Bayes for Sentiment Classification of Reviews. Results states that Naïve Bayes approach outperformed the svm. If the training dataset had a large number of reviews, Naive bayes approach reached high accuracies as compare to other.",
"title": ""
},
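A comparison of the kind described above is straightforward to reproduce with standard tooling. The sketch below trains both a multinomial Naive Bayes classifier and a linear SVM on a tiny bag-of-words toy corpus using scikit-learn; the reviews are invented placeholders, not the dataset used in the study, so no conclusion about which model wins should be drawn from it.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented corpus standing in for a real movie-review dataset.
reviews = ["a wonderful, moving film", "brilliant acting and a great story",
           "dull plot and terrible pacing", "a boring, predictable mess"]
labels = ["pos", "pos", "neg", "neg"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(CountVectorizer(), clf)   # bag-of-words features
    model.fit(reviews, labels)
    print(type(clf).__name__, model.predict(["great story, brilliant film"]))
```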
{
"docid": "2bf678c98d27501443f0f6fdf35151d7",
"text": "The goal of video summarization is to distill a raw video into a more compact form without losing much semantic information. However, previous methods mainly consider the diversity and representation interestingness of the obtained summary, and they seldom pay sufficient attention to semantic information of resulting frame set, especially the long temporal range semantics. To explicitly address this issue, we propose a novel technique which is able to extract the most semantically relevant video segments (i.e., valid for a long term temporal duration) and assemble them into an informative summary. To this end, we develop a semantic attended video summarization network (SASUM) which consists of a frame selector and video descriptor to select an appropriate number of video shots by minimizing the distance between the generated description sentence of the summarized video and the human annotated text of the original video. Extensive experiments show that our method achieves a superior performance gain over previous methods on two benchmark datasets.",
"title": ""
},
{
"docid": "554a0628270978757eda989c67ac3416",
"text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.",
"title": ""
},
{
"docid": "8335faee33da234e733d8f6c95332ec3",
"text": "Myanmar script uses no space between words and syllable segmentation represents a significant process in many NLP tasks such as word segmentation, sorting, line breaking and so on. In this study, a rulebased approach of syllable segmentation algorithm for Myanmar text is proposed. Segmentation rules were created based on the syllable structure of Myanmar script and a syllable segmentation algorithm was designed based on the created rules. A segmentation program was developed to evaluate the algorithm. A training corpus containing 32,283 Myanmar syllables was tested in the program and the experimental results show an accuracy rate of 99.96% for segmentation.",
"title": ""
},
{
"docid": "f153ee3853f40018ed0ae8b289b1efcf",
"text": "In this paper, the common mode (CM) EMI noise characteristic of three popular topologies of resonant converter (LLC, CLL and LCL) is analyzed. The comparison of their EMI performance is provided. A state-of-art LLC resonant converter with matrix transformer is used as an example to further illustrate the CM noise problem of resonant converters. The CM noise model of LLC resonant converter is provided. A novel method of shielding is provided for matrix transformer to reduce common mode noise. The CM noise of LLC converter has a significantly reduction with shielding. The loss of shielding is analyzed by finite element analysis (FEA) tool. Then the method to reduce the loss of shielding is discussed. There is very little efficiency sacrifice for LLC converter with shielding according to the experiment result.",
"title": ""
},
{
"docid": "e75669b68e8736ee6044443108c00eb1",
"text": "UNLABELLED\nThe evolution in adhesive dentistry has broadened the indication of esthetic restorative procedures especially with the use of resin composite material. Depending on the clinical situation, some restorative techniques are best indicated. As an example, indirect adhesive restorations offer many advantages over direct techniques in extended cavities. In general, the indirect technique requires two appointments and a laboratory involvement, or it can be prepared chairside in a single visit either conventionally or by the use of computer-aided design/computer-aided manufacturing systems. In both cases, there will be an extra cost as well as the need of specific materials. This paper describes the clinical procedures for the chairside semidirect technique for composite onlay fabrication without the use of special equipments. The use of this technique combines the advantages of the direct and the indirect restoration.\n\n\nCLINICAL SIGNIFICANCE\nThe semidirect technique for composite onlays offers the advantages of an indirect restoration and low cost, and can be the ideal treatment option for extended cavities in case of financial limitations.",
"title": ""
},
{
"docid": "77985effa998d08e75eaa117e07fc7a9",
"text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.",
"title": ""
},
{
"docid": "993137842ece533ab9d1f8737904fc5c",
"text": "In this paper, we propose co-prime arrays for effective direction-of-arrival (DOA) estimation. To fully utilize the virtual aperture achieved in the difference co-array constructed from a co-prime array structure, sparsity-based spatial spectrum estimation technique is exploited. Compared to existing techniques, the proposed technique achieves better utilization of the co-array aperture and thus results in increased degrees-of-freedom as well as improved DOA estimation performance.",
"title": ""
},
{
"docid": "fc26ebb8329c84d96a714065117dda02",
"text": "Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background of what deep learning is, and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology.",
"title": ""
},
{
"docid": "1a8bcfab4c66a3ac7b1b1112be46911a",
"text": "Despite the widespread assumption that students require scaffolding support for self-regulated learning (SRL) processes in computer-based learning environments (CBLEs), there is little clarity as to which types of scaffolds are most effective. This study offers a literature review covering the various scaffolds that support SRL processes in the domain of science education. Effective scaffolds are categorized and discussed according to the different areas and phases of SRL. The results reveal that most studies on scaffolding processes focus on cognition, whereas few focus on the non-cognitive areas of SRL. In the field of cognition, prompts appear to be the most effective scaffolds, especially for processes during the control phase. This review also shows that studies have paid little attention to scaffold designs, learner characteristics, or various task characteristics, despite the fact that these variables have been found to have a significant influence. We conclude with the implications of our results on future design and research in the field of SRL using CBLEs.",
"title": ""
},
{
"docid": "f5aa0531d2b560b4bddd8f93d308f5bc",
"text": "Even well-designed software systems suffer from chronic performance degradation, also known as “software aging”, due to internal (e.g., software bugs) or external (e.g., resource exhaustion) impairments. These chronic problems often fly under the radar of software monitoring systems before causing severe impacts (e.g., system failures). Therefore, it is a challenging issue how to timely predict the occurrence of failures caused by these problems. Unfortunately, the effectiveness of prior approaches are far from satisfactory due to the insufficiency of aging indicators adopted by them. To accurately predict failures caused by software aging which are named as Aging-Related Failure (ARFs), this paper presents a novel entropy-based aging indicator, namely Multidimensional Multi-scale Entropy (MMSE) which leverages the complexity embedded in runtime performance metrics to indicate software aging. To the best of our knowledge, this is the first time to leverage entropy to predict ARFs. Based upon MMSE, we implement three failure prediction approaches encapsulated in a proof-of-concept prototype named ARF-Predictor. The experimental evaluations in a Video on Demand (VoD) system, and in a real-world production system, AntVision, show that ARF-Predictor can predict ARFs with a very high accuracy and a low <italic>Ahead-Time-To-Failure (<inline-formula> <tex-math notation=\"LaTeX\">$ATTF$</tex-math><alternatives><inline-graphic xlink:href=\"chen-ieq1-2604381.gif\"/> </alternatives></inline-formula>)</italic>. Compared to previous approaches, ARF-Predictor improves the prediction accuracy by about 5 times and reduces <inline-formula><tex-math notation=\"LaTeX\">$ATTF$</tex-math><alternatives> <inline-graphic xlink:href=\"chen-ieq2-2604381.gif\"/></alternatives></inline-formula> even by 3 orders of magnitude. In addition, ARF-Predictor is light-weight enough to satisfy the real-time requirement.",
"title": ""
},
{
"docid": "088d6f1cd3c19765df8a16cd1a241d18",
"text": "Legged robots need to be able to classify and recognize different terrains to adapt their gait accordingly. Recent works in terrain classification use different types of sensors (like stereovision, 3D laser range, and tactile sensors) and their combination. However, such sensor systems require more computing power, produce extra load to legged robots, and/or might be difficult to install on a small size legged robot. In this work, we present an online terrain classification system. It uses only a monocular camera with a feature-based terrain classification algorithm which is robust to changes in illumination and view points. For this algorithm, we extract local features of terrains using either Scale Invariant Feature Transform (SIFT) or Speed Up Robust Feature (SURF). We encode the features using the Bag of Words (BoW) technique, and then classify the words using Support Vector Machines (SVMs) with a radial basis function kernel. We compare this feature-based approach with a color-based approach on the Caltech-256 benchmark as well as eight different terrain image sets (grass, gravel, pavement, sand, asphalt, floor, mud, and fine gravel). For terrain images, we observe up to 90% accuracy with the feature-based approach. Finally, this online terrain classification system is successfully applied to our small hexapod robot AMOS II. The output of the system providing terrain information is used as an input to its neural locomotion control to trigger an energy-efficient gait while traversing different terrains.",
"title": ""
},
{
"docid": "30604dca66bbf3f0abe63c101f02e434",
"text": "This paper presents a novel feature based parameterization approach of human bodies from the unorganized cloud points and the parametric design method for generating new models based on the parameterization. The parameterization consists of two phases. Firstly, the semantic feature extraction technique is applied to construct the feature wireframe of a human body from laser scanned 3D unorganized points. Secondly, the symmetric detail mesh surface of the human body is modeled. Gregory patches are utilized to generate G 1 continuous mesh surface interpolating the curves on feature wireframe. After that, a voxel-based algorithm adds details on the smooth G 1 continuous surface by the cloud points. Finally, the mesh surface is adjusted to become symmetric. Compared to other template fitting based approaches, the parameterization approach introduced in this paper is more efficient. The parametric design approach synthesizes parameterized sample models to a new human body according to user input sizing dimensions. It is based on a numerical optimization process. The strategy of choosing samples for synthesis is also introduced. Human bodies according to a wide range of dimensions can be generated by our approach. Different from the mathematical interpolation function based human body synthesis methods, the models generated in our method have the approximation errors minimized. All mannequins constructed by our approach have consistent feature patches, which benefits the design automation of customized clothes around human bodies a lot.",
"title": ""
}
] |
scidocsrr
|
234ef7d1b6c019cbf7144b32ced5f02a
|
Frankly, We Do Give a Damn: The Relationship Between Profanity and Honesty.
|
[
{
"docid": "1c16eec32b941af1646843bb81d16b5f",
"text": "Facebook is rapidly gaining recognition as a powerful research tool for the social sciences. It constitutes a large and diverse pool of participants, who can be selectively recruited for both online and offline studies. Additionally, it facilitates data collection by storing detailed records of its users' demographic profiles, social interactions, and behaviors. With participants' consent, these data can be recorded retrospectively in a convenient, accurate, and inexpensive way. Based on our experience in designing, implementing, and maintaining multiple Facebook-based psychological studies that attracted over 10 million participants, we demonstrate how to recruit participants using Facebook, incentivize them effectively, and maximize their engagement. We also outline the most important opportunities and challenges associated with using Facebook for research, provide several practical guidelines on how to successfully implement studies on Facebook, and finally, discuss ethical considerations.",
"title": ""
}
] |
[
{
"docid": "530ef3f5d2f7cb5cc93243e2feb12b8e",
"text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.",
"title": ""
},
{
"docid": "77b1507ce0e732b3ac93d83f1a5971b3",
"text": "Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier technology for high data rate communication system. The basic principle of OFDM i s to divide the available spectrum into parallel channel s in order to transmit data on these channels at a low rate. The O FDM concept is based on the fact that the channels refe rr d to as carriers are orthogonal to each other. Also, the fr equency responses of the parallel channels are overlapping. The aim of this paper is to simulate, using GNU Octave, an OFD M transmission under Additive White Gaussian Noise (AWGN) and/or Rayleigh fading and to analyze the effects o f these phenomena.",
"title": ""
},
{
"docid": "d54e33049b3f5170ec8bd09d8f17c05c",
"text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.",
"title": ""
},
{
"docid": "4b6b9539468db238d92e9762b2650b61",
"text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: m.rosemann@qut.edu.au J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). 
It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke",
"title": ""
},
{
"docid": "9710abc9bc114470e25a4c12af58dc90",
"text": "The growth of the mobile phone users has led to a dramatic increase in SMS spam messages. Though in most parts of the world, mobile messaging channel is currently regarded as “clean” and trusted, on the contrast recent reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. It is an evolving setback especially in the Middle East and Asia. SMS spam filtering is a comparatively recent errand to deal such a problem. It inherits many concerns and quick fixes from Email spam filtering. However it fronts its own certain issues and problems. This paper inspires to work on the task of filtering mobile messages as Ham or Spam for the Indian Users by adding Indian messages to the worldwide available SMS dataset. The paper analyses different machine learning classifiers on large corpus of SMS messages for Indian people.",
"title": ""
},
{
"docid": "d3ac14fd1ac21c4d67060ab914859247",
"text": "Decision making in uncertain and risky environments is a prominent area of research. Standard economic theories fail to fully explain human behaviour, while a potentially promising alternative may lie in the direction of Reinforcement Learning (RL) theory. We analyse data for 46 players extracted from a financial market online game and test whether Reinforcement Learning (Q-Learning) could capture these players behaviour using a riskiness measure based on financial modeling. Moreover we test an earlier hypothesis that players are “naíve” (short-sighted). Our results indicate that Reinforcement Learning is a component of the decision-making process. We also find that there is a significant improvement of fitting for some of the players when using a full RL model against a reduced version (myopic), where only immediate reward is valued by the players, indicating that not all players are naíve.",
"title": ""
},
{
"docid": "ed83ce40780419961a8c5eeca780636e",
"text": "Some objects in our environment are strongly tied to motor actions, a phenomenon called object affordance. A cup, for example, affords us to reach out to it and grasp it by its handle. Studies indicate that merely viewing an affording object triggers motor activations in the brain. The present study investigated whether object affordance would also result in an attention bias, that is, whether observers would rather attend to graspable objects within reach compared to non-graspable but reachable objects or to graspable objects out of reach. To this end, we conducted a combined reaction time and motion tracking study with a table in a virtual three-dimensional space. Two objects were positioned on the table, one near, the other one far from the observer. In each trial, two graspable objects, two non-graspable objects, or a combination of both was presented. Participants were instructed to detect a probe appearing on one of the objects as quickly as possible. Detection times served as indirect measure of attention allocation. The motor association with the graspable object was additionally enhanced by having participants grasp a real object in some of the trials. We hypothesized that visual attention would be preferentially allocated to the near graspable object, which should be reflected in reduced reaction times in this condition. Our results confirm this assumption: probe detection was fastest at the graspable object at the near position compared to the far position or to a non-graspable object. A follow-up experiment revealed that in addition to object affordance per se, immediate graspability of an affording object may also influence this near-space advantage. Our results suggest that visuospatial attention is preferentially allocated to affording objects which are immediately graspable, and thus establish a strong link between an object' s motor affordance and visual attention.",
"title": ""
},
{
"docid": "972be3022e7123be919d9491a6dafe1c",
"text": "An improved coaxial high-voltage vacuum insulator applied in a Tesla-type generator, model TPG700, has been designed and tested for high-power microwave (HPM) generation. The design improvements include: changing the connection type of the insulator to the conductors from insertion to tangential, making the insulator thickness uniform, and using Nylon as the insulation material. Transient field simulation shows that the electric field (E-field) distribution within the improved insulator is much more uniform and that the average E-field on the two insulator surfaces is decreased by approximately 30% compared with the previous insulator at a voltage of 700 kV. Key structures such as the anode and the cathode shielding rings of the insulator have been optimized to significantly reduce E-field stresses. Aging experiments and experiments for HPM generation with this insulator were conducted based on a relativistic backward-wave oscillator. The preliminary test results show that the output voltage is larger than 700 kV and the HPM power is about 1 GW. Measurements show that the insulator is well within allowable E-field stresses on both the vacuum insulator surface and the cathode shielding ring.",
"title": ""
},
{
"docid": "e066761ecb7d8b7468756fb4be6b8fcb",
"text": "The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.",
"title": ""
},
{
"docid": "426d3b0b74eacf4da771292abad06739",
"text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.",
"title": ""
},
{
"docid": "71d065cd109392ae41bc96fe0cd2e0f4",
"text": "Absence of an upper limb leads to severe impairments in everyday life, which can further influence the social and mental state. For these reasons, early developments in cosmetic and body-driven prostheses date some centuries ago, and they have been evolving ever since. Following the end of the Second World War, rapid developments in technology resulted in powered myoelectric hand prosthetics. In the years to come, these devices were common on the market, though they still suffered high user abandonment rates. The reasons for rejection were trifold - insufficient functionality of the hardware, fragile design, and cumbersome control. In the last decade, both academia and industry have reached major improvements concerning technical features of upper limb prosthetics and methods for their interfacing and control. Advanced robotic hands are offered by several vendors and research groups, with a variety of active and passive wrist options that can be articulated across several degrees of freedom. Nowadays, elbow joint designs include active solutions with different weight and power options. Control features are getting progressively more sophisticated, offering options for multiple sensor integration and multi-joint articulation. Latest developments in socket designs are capable of facilitating implantable and multiple surface electromyography sensors in both traditional and osseointegration-based systems. Novel surgical techniques in combination with modern, sophisticated hardware are enabling restoration of dexterous upper limb functionality. This article is aimed at reviewing the latest state of the upper limb prosthetic market, offering insights on the accompanying technologies and techniques. We also examine the capabilities and features of some of academia's flagship solutions and methods.",
"title": ""
},
{
"docid": "c0f958c7bb692f8a405901796445605a",
"text": "Thickening is the first step in the design of sustainable (cost effective, environmentally friendly, and socially viable) tailings management solutions for surface deposition, mine backfilling, and sub-aqueous discharge. The high water content slurries are converted to materials with superior dewatering properties by adding long-chain synthetic polymers. Given the solid and liquid composition of a slurry, a high settling rate alongside a high solids content can be achieved by optimizing the various polymers parameters: ionic type (T), charge density (C), molecular weight (M), and dosage (D). This paper developed a statistical model to predict field performance of a selected metal mine slurry using laboratory test data. Results of sedimentationconsolidation tests were fitted using the method of least squares. A newly devised polymer characteristic coefficient (Cp) that combined the various polymer parameters correlated well with the observed dewatering behavior as the R equalled 0.95 for void ratio and 0.84 for hydraulic conductivity. The various combinations of polymer parameters resulted in variable slurry performance during sedimentation and were found to converge during consolidation. Further, the void ratio-effective stress and the hydraulic conductivity-void ratio relationships were found to be e = a σ′ b and k = 10 (c + e , respectively.",
"title": ""
},
{
"docid": "05b6f9f32fa55a320533519f96ed2457",
"text": "INTRODUCTION\nThe anatomy of the facial artery, its tortuosity, and branch patterns are well documented. To date, a reliable method of identifying the facial artery, based on surface landmarks, has not been described. The purpose of this study is to characterize the relationship of the facial artery with several facial topographic landmarks, and to identify a location where the facial artery could predictably be identified.\n\n\nMETHODS\nFollowing institutional review board approval, 20 hemifacial dissections on 10 cadaveric heads were performed. Distances from the facial artery to the oral commissure, mandibular angle, lateral canthus, and Manson's point were measured. Distances were measured and confirmed clinically using Doppler examination in 20 hemifaces of 10 healthy volunteers.\n\n\nRESULTS\nManson's point identifies the facial artery with 100% accuracy and precision, within a 3 mm radius in both cadaveric specimens and living human subjects. Cadaveric measurements demonstrated that the facial artery is located 19 mm ± 5.5 from the oral commissure, 31 mm ± 6.8 from the mandibular angle, 92 mm ± 8.0 from the lateral canthus. Doppler examination on healthy volunteers (5 male, 5 female) demonstrated measurements of 18 mm ± 4.0, 50 mm ± 6.4, and 79 mm ± 8.2, respectively.\n\n\nCONCLUSIONS\nThe identification of the facial artery is critical for the craniofacial surgeon in order to avoid inadvertent injury, plan for local flaps, and in preparation of a recipient vessel for free tissue microvascular reconstruction. Manson's point can aid the surgeon in consistently indentifying the facial artery.",
"title": ""
},
{
"docid": "01a70ee73571e848575ed992c1a3a578",
"text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.",
"title": ""
},
{
"docid": "8787335d8f5a459dc47b813fd385083b",
"text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.",
"title": ""
},
{
"docid": "05227ab021e31353700c82eb2a3375bd",
"text": "Human Computer Interaction is one of the pervasive application areas of computer science to develop with multimodal interaction for information sharings. The conversation agent acts as the major core area for developing interfaces between a system and user with applied AI for proper responses. In this paper, the interactive system plays a vital role in improving knowledge in the domain of health through the intelligent interface between machine and human with text and speech. The primary aim is to enrich the knowledge and help the user in the domain of health using conversation agent to offer immediate response with human companion feel.",
"title": ""
},
{
"docid": "3cc74bce3c395b82dac437286aace591",
"text": "We present a technique for simulating plastic deformation in sheets of thin materials, such as crumpled paper, dented metal, and wrinkled cloth. Our simulation uses a framework of adaptive mesh refinement to dynamically align mesh edges with folds and creases. This framework allows efficient modeling of sharp features and avoids bend locking that would be otherwise caused by stiff in-plane behavior. By using an explicit plastic embedding space we prevent remeshing from causing shape diffusion. We include several examples demonstrating that the resulting method realistically simulates the behavior of thin sheets as they fold and crumple.",
"title": ""
},
{
"docid": "3e0d7fb26382b9151f50ef18dc40b97a",
"text": "A. Redish et al. (2007) proposed a reinforcement learning model of context-dependent learning and extinction in conditioning experiments, using the idea of \"state classification\" to categorize new observations into states. In the current article, the authors propose an interpretation of this idea in terms of normative statistical inference. They focus on renewal and latent inhibition, 2 conditioning paradigms in which contextual manipulations have been studied extensively, and show that online Bayesian inference within a model that assumes an unbounded number of latent causes can characterize a diverse set of behavioral results from such manipulations, some of which pose problems for the model of Redish et al. Moreover, in both paradigms, context dependence is absent in younger animals, or if hippocampal lesions are made prior to training. The authors suggest an explanation in terms of a restricted capacity to infer new causes.",
"title": ""
},
{
"docid": "59d6765507415b0365f3193843d01459",
"text": "Password typing is the most widely used identity verification method in World Wide Web based Electronic Commerce. Due to its simplicity, however, it is vulnerable to imposter attacks. Keystroke dynamics and password checking can be combined to result in a more secure verification system. We propose an autoassociator neural network that is trained with the timing vectors of the owner's keystroke dynamics and then used to discriminate between the owner and an imposter. An imposter typing the correct password can be detected with very high accuracy using the proposed approach. This approach can be effectively implemented by a Java applet and used in the World Wide Web.",
"title": ""
},
{
"docid": "72c79181572c836cb92aac8fe7a14c5d",
"text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).",
"title": ""
}
] |
scidocsrr
|
bad4a0b7b7b20c555f6a52a0628b0e85
|
Collocated Intergenerational Console Gaming
|
[
{
"docid": "9d089af812c0fdd245a218362d88b62a",
"text": "Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. This raises a new design challenge for HCI - how should spectators experience a performer's interaction with a computer? We classify public interfaces (including examples from art, performance and exhibition design) according to the extent to which a performer's manipulations of an interface and their resulting effects are hidden, partially revealed, fully revealed or even amplified for spectators. Our taxonomy uncovers four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they tend to be revealed enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent but effects are only revealed as the spectator takes their turn.",
"title": ""
},
{
"docid": "d2f7b25a45d3706ef7bbdc2764bc129b",
"text": "In this paper, we present results from a qualitative study of collocated group console gaming. We focus on motivations for, perceptions of, and practices surrounding the shared use of console games by a variety of established groups of gamers. These groups include both intragenerational groups of youth, adults, and elders as well as intergenerational families. Our analysis highlights the numerous ways that console games serve as a computational meeting place for a diverse population of gamers.",
"title": ""
}
] |
[
{
"docid": "bdb49f702123031d2ee935a387c9888e",
"text": "Standard state-machine replication involves consensus on a sequence of totally ordered requests through, for example, the Paxos protocol. Such a sequential execution model is becoming outdated on prevalent multi-core servers. Highly concurrent executions on multi-core architectures introduce non-determinism related to thread scheduling and lock contentions, and fundamentally break the assumption in state-machine replication. This tension between concurrency and consistency is not inherent because the total-ordering of requests is merely a simplifying convenience that is unnecessary for consistency. Concurrent executions of the application can be decoupled with a sequence of consensus decisions through consensus on partial-order traces, rather than on totally ordered requests, that capture the non-deterministic decisions in one replica execution and to be replayed with the same decisions on others. The result is a new multi-core friendly replicated state-machine framework that achieves strong consistency while preserving parallelism in multi-thread applications. On 12-core machines with hyper-threading, evaluations on typical applications show that we can scale with the number of cores, achieving up to 16 times the throughput of standard replicated state machines.",
"title": ""
},
{
"docid": "b974a8d8b298bfde540abc451f76bf90",
"text": "This chapter provides information on commonly used equipment in industrial mammalian cell culture, with an emphasis on bioreactors. The actual equipment used in the cell culture process can vary from one company to another, but the main steps remain the same. The process involves expansion of cells in seed train and inoculation train processes followed by cultivation of cells in a production bioreactor. Process and equipment options for each stage of the cell culture process are introduced and examples are provided. Finally, the use of disposables during seed train and cell culture production is discussed.",
"title": ""
},
{
"docid": "4f527bddf622c901a7894ce7cc381ee1",
"text": "Most popular programming languages support situations where a value of one type is converted into a value of another type without any explicit cast. Such implicit type conversions, or type coercions, are a highly controversial language feature. Proponents argue that type coercions enable writing concise code. Opponents argue that type coercions are error-prone and that they reduce the understandability of programs. This paper studies the use of type coercions in JavaScript, a language notorious for its widespread use of coercions. We dynamically analyze hundreds of programs, including real-world web applications and popular benchmark programs. We find that coercions are widely used (in 80.42% of all function executions) and that most coercions are likely to be harmless (98.85%). Furthermore, we identify a set of rarely occurring and potentially harmful coercions that safer subsets of JavaScript or future language designs may want to disallow. Our results suggest that type coercions are significantly less evil than commonly assumed and that analyses targeted at real-world JavaScript programs must consider coercions. 1998 ACM Subject Classification D.3.3 Language Constructs and Features, F.3.2 Semantics of Programming Languages, D.2.8 Metrics",
"title": ""
},
{
"docid": "08c2f734622b3ba4c3d71373139b9d58",
"text": "International Journal of Exercise Science 6(4) : 310-319, 2013. This study was designed to compare the acute effect of self-myofascial release (SMR), postural alignment exercises, and static stretching on joint range-of-motion. Our sample included 27 participants (n = 14 males and n = 13 females) who had below average joint range-of-motion (specifically a sitand-reach score of 13.5 inches [34.3 cm] or less). All were university students 18–27 years randomly assigned to complete two 30–40-minute data collection sessions with each testing session consisting of three sit-and-reach measurements (which involved lumbar spinal flexion, hip flexion, knee extension, and ankle dorsiflexion) interspersed with two treatments. Each treatment included foam-rolling, postural alignment exercises, or static stretching. Participants were assigned to complete session 1 and session 2 on two separate days, 24 hours to 48 hours apart. The data were analyzed so carryover effects could be estimated and showed that no single acute treatment significantly increased posterior mean sit-and-reach scores. However, significant gains (95% posterior probability limits) were realized with both postural alignment exercises and static stretching when used in combination with foam-rolling. For example, the posterior means equaled 1.71 inches (4.34 cm) when postural alignment exercises were followed by foam-rolling; 1.76 inches (4.47 cm) when foam-rolling was followed by static stretching; 1.49 inches (3.78 cm) when static stretching was followed by foam-rolling; and 1.18 inches (2.99 cm) when foam-rolling was followed by postural alignment exercises. Our results demonstrate that an acute treatment of foam-rolling significantly increased joint range-of-motion in participants with below average joint range-of-motion when combined with either postural alignment exercises or static stretching.",
"title": ""
},
{
"docid": "bf08bc98eb9ef7a18163fc310b10bcf6",
"text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.",
"title": ""
},
{
"docid": "26c4cded1181ce78cc9b61a668e57939",
"text": "Monitoring crop condition and production estimates at the state and county level is of great interest to the U.S. Department of Agriculture. The National Agricultural Statistical Service (NASS) of the U.S. Department of Agriculture conducts field interviews with sampled farm operators and obtains crop cuttings to make crop yield estimates at regional and state levels. NASS needs supplemental spatial data that provides timely information on crop condition and potential yields. In this research, the crop model EPIC (Erosion Productivity Impact Calculator) was adapted for simulations at regional scales. Satellite remotely sensed data provide a real-time assessment of the magnitude and variation of crop condition parameters, and this study investigates the use of these parameters as an input to a crop growth model. This investigation was conducted in the semi-arid region of North Dakota in the southeastern part of the state. The primary objective was to evaluate a method of integrating parameters retrieved from satellite imagery in a crop growth model to simulate spring wheat yields at the sub-county and county levels. The input parameters derived from remotely sensed data provided spatial integrity, as well as a real-time calibration of model simulated parameters during the season, to ensure that the modeled and observed conditions agree. A radiative transfer model, SAIL (Scattered by Arbitrary Inclined Leaves), provided the link between the satellite data and crop model. The model parameters were simulated in a geographic information system grid, which was the platform for aggregating yields at local and regional scales. A model calibration was performed to initialize the model parameters. This calibration was performed using Landsat data over three southeast counties in North Dakota. The model was then used to simulate crop yields for the state of North Dakota with inputs derived from NOAA AVHRR data. The calibration and the state level simulations are compared with spring wheat yields reported by NASS objective yield surveys. Introduction Monitoring agricultural crop conditions during the growing season and estimating the potential crop yields are both important for the assessment of seasonal production. Accurate and timely assessment of particularly decreased production caused by a natural disaster, such as drought or pest infestation, can be critical for countries where the economy is dependent on the crop harvest. Early assessment of yield reductions could avert a disastrous situation and help in strategic planning to meet the demands. The National Agricultural Statistics Service (NASS) of the U.S. Department of Agriculture (USDA) monitors crop conditions in the U.S. and provides monthly projected estimates of crop yield and production. NASS has developed methods to assess crop growth and development from several sources of information, including several types of surveys of farm operators. Field offices in each state are responsible for monitoring the progress and health of the crop and integrating crop condition with local weather information. This crop information is also distributed in a biweekly report on regional weather conditions. NASS provides monthly information to the Agriculture Statistics Board, which assesses the potential yields of all commodities based on crop condition information acquired from different sources. This research complements efforts to independently assess crop condition at the county, agricultural statistics district, and state levels. 
In the early 1960s, NASS initiated “objective yield” surveys for crops such as corn, soybean, wheat, and cotton in States with the greatest acreages (Allen et al., 1994). These surveys establish small sample units in randomly selected fields which are visited monthly to determine numbers of plants, numbers of fruits (wheat heads, corn ears, soybean pods, etc.), and weight per fruit. Yield forecasting models are based on relationships of samples of the same maturity stage in comparable months during the past four years in each State. Additionally, the Agency implemented a midyear Area Frame that enabled creation of probabilistic based acreage estimates. For major crops, sampling errors are as low as 1 percent at the U.S. level and 2 to 3 percent in the largest producing States. Accurate crop production forecasts require accurate forecasts of acreage at harvest, its geographic distribution, and the associated crop yield determined by local growing conditions. There can be significant year-to-year variability which requires a systematic monitoring capability. To quantify the complex effects of environment, soils, and management practices, both yield and acreage must be assessed at sub-regional levels where a limited range of factors and simple interactions permit modeling and estimation. A yield forecast within homogeneous soil type, land use, crop variety, and climate preclude the necessity for use of a complex forecast model. In 1974, the Large Area Crop Inventory Experiment (LACIE), a joint effort of the National Aeronautics and Space Administration (NASA), the USDA, and the National Oceanic and Atmospheric Administration (NOAA) began to apply satellite remote sensing technology on experimental bases to forecast harvests in important wheat producing areas (MacDonald, 1979). In 1977 LACIE in-season forecasted a 30 percent shortfall in Soviet spring wheat production that came within 10 percent of the official Soviet estimate that came several months after the harvest (Myers, 1983).",
"title": ""
},
{
"docid": "c61559bdb209cf7098bb11c372a483c6",
"text": "This paper presents a lexicon model for the description of verbs, nouns and adjectives to be used in applicatons like sentiment analysis and opinion mining. The model aims to describe the detailed subjectivity relations that exist between the actors in a sentence expressing separate attitudes for each actor. Subjectivity relations that exist between the different actors are labeled with information concerning both the identity of the attitude holder and the orientation (positive vs. negative) of the attitude. The model includes a categorization into semantic categories relevant to opinion mining and sentiment analysis and provides means for the identification of the attitude holder and the polarity of the attitude and for the description of the emotions and sentiments of the different actors involved in the text. Special attention is paid to the role of the speaker/writer of the text whose perspective is expressed and whose views on what is happening are conveyed in the text. Finally, validation is provided by an annotation study that shows that these subtle subjectivity relations are reliably identifiable by human annotators.",
"title": ""
},
{
"docid": "70b410094dd718d10e6ae8cd3f93c768",
"text": "Software developers and project managers are struggling to assess the appropriateness of agile processes to their development environments. This paper identifies limitations that apply to many of the published agile processes in terms of the types of projects in which their application may be problematic. INTRODUCTION As more organizations seek to gain competitive advantage through timely deployment of Internet-based services, developers are under increasing pressure to produce new or enhanced implementations quickly [2,8]. Agile software development processes were developed primarily to address this problem, that is, the problem of developing software in \"Internet time\". Agile approaches utilize technical and managerial processes that continuously adapt and adjust to (1) changes derived from experiences gained during development, (2) changes in software requirements and (3) changes in the development environment. Agile processes are intended to support early and quick production of working code. This is accomplished by structuring the development process into iterations, where an iteration focuses on delivering working code and other artifacts that provide value to the customer and, secondarily, to the project. Agile process proponents and critics often emphasize the code focus of these processes. Proponents often argue that code is the only deliverable that matters, and marginalize the role of analysis and design models and documentation in software creation and evolution. Agile process critics point out that the emphasis on code can lead to corporate memory loss because there is little emphasis on producing good documentation and models to support software creation and evolution of large, complex systems. The claims made by agile process proponents and critics lead to questions about what practices, techniques, and infrastructures are suitable for software development in today’s rapidly changing development environments. In particular, answers to questions related to the suitability of agile processes to particular application domains and development environments are often based on anecdotal accounts of experiences. In this paper we present what we perceive as limitations of agile processes based on our analysis of published works on agile processes [14]. Processes that name themselves “agile” vary greatly in values, practices, and application domains. It is therefore difficult to assess agile processes in general and identify limitations that apply to all agile processes. Our analysis [14] is based on a study of assumptions underlying Extreme Programming (XP) [3,5,6,10], Scrum [12,13], Agile Unified Process [11], Agile Modeling [1] and the principles stated by the Agile Alliance. It is mainly an analytical study, supported by experiences on a few XP projects conducted by the authors. THE AGILE ALLIANCE In recent years a number of processes claiming to be \"agile\" have been proposed in the literature. To avoid confusion over what it means for a process to be \"agile\", seventeen agile process methodologists came to an agreement on what \"agility\" means during a 2001 meeting where they discussed future trends in software development processes. One result of the meeting was the formation of the \"Agile Alliance\" and the publication of its manifesto (see http://www.agilealliance.org/principles.html). The manifesto of the \"Agile Alliance\" is a condensed definition of the values and goals of \"Agile Software Development\". 
This manifesto is detailed through a number of common principles for agile processes. The principles are listed below. 1. \"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.\" 2. \"Business people and developers must work together daily throughout the project.\" 3. \"Welcome changing requirements, even late in development.\" 4. \"Deliver working software frequently.\" 5. \"Working software is the primary measure of progress.\" 6. \"Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.\" 7. \"The best architectures, requirements, and designs emerge from self-organizing teams.\" 8. \"The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.\" 9. \"Agile processes promote sustainable development.\" 10. \"Continuous attention to technical excellence and good design enhances agility.\" 11. \"Simplicity is essential.\" 12. \"Project teams evaluate their effectiveness at regular intervals and adjust their behavior accordingly.\" AN ANALYSIS OF AGILE PROCESSES In this section we discuss the limitations of agile processes that we have identified, based on our analysis of the Agile Alliance principles and assumptions underlying agile processes. The next subsection lists the managerial and technical assumptions we identified in our study [14], and the following subsection discusses the limitations derived from the assumptions. Underlying Assumptions The stated benefits of agile processes over traditional prescriptive processes are predicated on the validity of these assumptions. These assumptions are discussed in more detail in another paper [14]. Assumption 1: Customers are co-located with the development team and are readily available when needed by developers. Furthermore, the reliance on face-to-face communication requires that developers be located in close proximity to each other. Assumption 2: Documentation and software models do not play central roles in software development. Assumption 3: Software requirements and the environment in which software is developed evolve as the software is being developed. Assumption 4: Development processes that are dynamically adapted to changing project and product characteristics are more likely to produce high-quality products. Assumption 5: Developers have the experience needed to define and adapt their processes appropriately. In other words, an organization can form teams consisting of bright, highly-experienced problem solvers capable of effectively evolving their processes while they are being executed. Assumption 6: Project visibility can be achieved primarily through delivery of increments and a few metrics. Assumption 7: Rigorous evaluation of software artifacts (products and processes) can be restricted to frequent informal reviews and code testing. Assumption 8: Reusability and generality should not be goals of application-specific software development. Assumption 9: Cost of change does not dramatically increase over time. Assumption 10: Software can be developed in increments. 
Assumption 11: There is no need to design for change because any change can be effectively handled by refactoring the code [9]. Limitations of Agile Processes The assumptions listed above do not hold for all software development environments in general, nor for all “agile” processes in particular. This should not be surprising; none of the agile processes is a silver bullet (despite the enthusiastic claims of some of its proponents). In this part we describe some of the situations in which agile processes may generally not be applicable. It is possible that some agile processes fit these assumptions better, while others may be able to be extended to address the limitations discussed here. Such extensions can involve incorporating principles and practices often associated with more predictive development practices into agile processes. 1. Limited support for distributed development",
"title": ""
},
{
"docid": "421a0d89557ea20216e13dee9db317ca",
"text": "Online advertising is progressively moving towards a programmatic model in which ads are matched to actual interests of individuals collected as they browse the web. Letting the huge debate around privacy aside, a very important question in this area, for which little is known, is: How much do advertisers pay to reach an individual?\n In this study, we develop a first of its kind methodology for computing exactly that - the price paid for a web user by the ad ecosystem - and we do that in real time. Our approach is based on tapping on the Real Time Bidding (RTB) protocol to collect cleartext and encrypted prices for winning bids paid by advertisers in order to place targeted ads. Our main technical contribution is a method for tallying winning bids even when they are encrypted. We achieve this by training a model using as ground truth prices obtained by running our own \"probe\" ad-campaigns. We design our methodology through a browser extension and a back-end server that provides it with fresh models for encrypted bids. We validate our methodology using a one year long trace of 1600 mobile users and demonstrate that it can estimate a user's advertising worth with more than 82% accuracy.",
"title": ""
},
{
"docid": "0d3e55a7029d084f6ba889b7d354411c",
"text": "Electrophysiological and computational studies suggest that nigro-striatal dopamine may play an important role in learning about sequences of environmentally important stimuli, particularly when this learning is based upon step-by-step associations between stimuli, such as in second-order conditioning. If so, one would predict that disruption of the midbrain dopamine system--such as occurs in Parkinson's disease--may lead to deficits on tasks that rely upon such learning processes. This hypothesis was tested using a \"chaining\" task, in which each additional link in a sequence of stimuli leading to reward is trained step-by-step, until a full sequence is learned. We further examined how medication (L-dopa) affects this type of learning. As predicted, we found that Parkinson's patients tested 'off' L-dopa performed as well as controls during the first phase of this task, when required to learn a simple stimulus-response association, but were impaired at learning the full sequence of stimuli. In contrast, we found that Parkinson's patients tested 'on' L-dopa performed better than those tested 'off', and no worse than controls, on all phases of the task. These findings suggest that the loss of dopamine that occurs in Parkinson's disease can lead to specific learning impairments that are predicted by electrophysiological and computational studies, and that enhancing dopamine levels with L-dopa alleviates this deficit. This last result raises questions regarding the mechanisms by which midbrain dopamine modulates learning in Parkinson's disease, and how L-dopa affects these processes.",
"title": ""
},
{
"docid": "0ed429c00611025e38ae996db0a06d23",
"text": "Intuitive predictions follow a judgmental heuristic—representativeness. By this heuristic, people predict the outcome that appears most representative of the evidence. Consequently, intuitive predictions are insensitive to the reliability of the evidence or to the prior probability of the outcome, in violation of the logic of statistical prediction. The hypothesis that people predict by representativeness is supported in a series of studies with both naive and sophisticated subjects. It is shown that the ranking of outcomes by likelihood coincides with their ranking by representativeness and that people erroneously predict rare events and extreme values if these happen to be representative. The experience of unjustified confidence in predictions and the prevalence of fallacious intuitions concerning statistical regression are traced to the representativeness heuristic. In this paper, we explore the rules that determine intuitive predictions and judgments of confidence and contrast these rules to the normative principles of statistical prediction. Two classes of prediction are discussed: category prediction and numerical prediction. In a categorical case, the prediction is given in nominal form, for example, the winner in an election, the diagnosis of a patient, or a person's future occupation. In a numerical case, the prediction is given in numerical form, for example, the future value of a particular stock or of a student's grade point average. In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and The present paper is concerned with the role of one of these heuristics—representa-tiveness—in intuitive predictions. Given specific evidence (e.g., a personality sketch), the outcomes under consideration (e.g., occupations or levels of achievement) can be ordered by the degree to which they are representative of that evidence. The thesis of this paper is that people predict by representativeness, that is, they select or order outcomes by the 237",
"title": ""
},
{
"docid": "9c857daee24f793816f1cee596e80912",
"text": "Introduction Since the introduction of a new UK Ethics Committee Authority (UKECA) in 2004 and the setting up of the Central Office for Research Ethics Committees (COREC), research proposals have come under greater scrutiny than ever before. The era of self-regulation in UK research ethics has ended (Kerrison and Pollock, 2005). The UKECA recognise various committees throughout the UK that can approve proposals for research in NHS facilities (National Patient Safety Agency, 2007), and the scope of research for which approval must be sought is defined by the National Research Ethics Service, which has superceded COREC. Guidance on sample size (Central Office for Research Ethics Committees, 2007: 23) requires that 'the number should be sufficient to achieve worthwhile results, but should not be so high as to involve unnecessary recruitment and burdens for participants'. It also suggests that formal sample estimation size should be based on the primary outcome, and that if there is more than one outcome then the largest sample size should be chosen. Sample size is a function of three factors – the alpha level, beta level and magnitude of the difference (effect size) hypothesised. Referring to the expected size of effect, COREC (2007: 23) guidance states that 'it is important that the difference is not unrealistically high, as this could lead to an underestimate of the required sample size'. In this paper, issues of alpha, beta and effect size will be considered from a practical perspective. A freely-available statistical software package called GPower (Buchner et al, 1997) will be used to illustrate concepts and provide practical assistance to novitiate researchers and members of research ethics committees. There are a wide range of freely available statistical software packages, such as PS (Dupont and Plummer, 1997) and STPLAN (Brown et al, 2000). Each has features worth exploring, but GPower was chosen because of its ease of use and the wide range of study designs for which it caters. Using GPower, sample size and power can be estimated or checked by those with relatively little technical knowledge of statistics. Alpha and beta errors and power Researchers begin with a research hypothesis – a 'hunch' about the way that the world might be. For example, that treatment A is better than treatment B. There are logical reasons why this can never be demonstrated as absolutely true, but evidence that it may or may not be true can be obtained by …",
"title": ""
},
{
"docid": "100da900b23fbf4a9645907d89d730af",
"text": "This paper describes the design and manufacturing of soft artificial skin with an array of embedded soft strain sensors for detecting various hand gestures by measuring joint motions of five fingers. The proposed skin was made of a hyperelastic elastomer material with embedded microchannels filled with two different liquid conductors, an ionic liquid and a liquid metal. The ionic liquid microchannels were used to detect the mechanical strain changes of the sensing material, and the liquid metal microchannels were used as flexible and stretchable electrical wires for connecting the sensors to an external control circuit. The two heterogeneous liquid conductors were electrically interfaced through flexible conductive threads to prevent the two liquid from being intermixed. The skin device was connected to a computer through a microcontroller instrumentation circuit for reconstructing the 3-D hand motions graphically. The paper also presents preliminary calibration and experimental results.",
"title": ""
},
{
"docid": "0e5187e6d72082618bd5bda699adab93",
"text": "Many applications of mobile deep learning, especially real-time computer vision workloads, are constrained by computation power. This is particularly true for workloads running on older consumer phones, where a typical device might be powered by a singleor dual-core ARMv7 CPU. We provide an open-source implementation and a comprehensive analysis of (to our knowledge) the state of the art ultra-low-precision (<4 bit precision) implementation of the core primitives required for modern deep learning workloads on ARMv7 devices, and demonstrate speedups of 4x-20x over our additional state-of-the-art float32 and int8 baselines.",
"title": ""
},
{
"docid": "2d7ff73a3fb435bd11633f650b23172e",
"text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.",
"title": ""
},
{
"docid": "a2514f994292481d0fe6b37afe619cb5",
"text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/ watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received a tremendous attention recently, its application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used unusual method to send message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on slave’s head prior to sending him off to his son-inlaw. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was written on wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, recovered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. 
The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. The earliest actual book on steganography was a four hundred page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. Further development in the field occurred in 1883, with the publication of Auguste Kerchoffs’ Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.",
"title": ""
},
{
"docid": "f5b72167077481ca04e339ad4dc4da3c",
"text": "We have implemented a MATLAB source code for VES forward modeling and its inversion using a genetic algorithm (GA) optimization technique. The codes presented here are applied to the Schlumberger electrode arrangement. In the forward modeling computation, we have developed code to generate theoretical apparent resistivity curves from a specified layered earth model. The input to this program consists of the number of layers, the layer resistivity and thickness. The output of this program is apparent resistivity versus electrode spacing incorporated in the inversion process as apparent resistivity data. For the inversion, we have developed a MATLAB code to invert (for layer resistivity and thickness) the apparent resistivity data by the genetic algorithm optimization technique. The code also has some function files involving the basic stages in the GA inversion. Our inversion procedure addressed calculates forward solutions from sets of random input, to find the apparent resistivity. Then, it evolves the models by better sets of inputs through processes that imitate natural mating, selection, crossover, and mutation in each generation. The aim of GA inversion is to find the best correlation between model and theoretical apparent resistivity curves. In this study, we present three synthetic examples that demonstrate the effectiveness and usefulness of this program. Our numerical modeling shows that the GA optimization technique can be applied for resolving layer parameters with reasonably low error values.",
"title": ""
},
{
"docid": "7bb1d856e5703afb571cf781d48ce403",
"text": "RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.",
"title": ""
},
{
"docid": "9871a5673f042b0565c50295be188088",
"text": "Formal security analysis has proven to be a useful tool for tracking modifications in communication protocols in an automated manner, where full security analysis of revisions requires minimum efforts. In this paper, we formally analysed prominent IoT protocols and uncovered many critical challenges in practical IoT settings. We address these challenges by using formal symbolic modelling of such protocols under various adversaries and security goals. Furthermore, this paper extends formal analysis to cryptographic Denial-of-Service (DoS) attacks and demonstrates that a vast majority of IoT protocols are vulnerable to such resource exhaustion attacks. We present a cryptographic DoS attack countermeasure that can be generally used in many IoT protocols. Our study of prominent IoT protocols such as CoAP and MQTT shows the benefits of our approach.",
"title": ""
},
{
"docid": "c03de8afcb5a6fce6c22e9394367f54d",
"text": "Thus the Gestalt domain with its three operations forms a general algebra. J. N. Wilson, Handbook of Computer Vision Algorithms in Image Algebra, 2nd ed. (1072), Computational Techniques and Algorithms for Image Processing (S. (1047), Universal Algebra and Coalgebra (Klaus Denecke, Shelly L. Wismath), World (986), Handbook of Mathematical Models in Computer Vision, (N. Paragios, (985), Numerical Optimization, second edition (Jorge Nocedal, Stephen J.",
"title": ""
}
] |
scidocsrr
|
52a8b82f35210c49548a141864212f1f
|
Broad Learning for Healthcare
|
[
{
"docid": "48c78545d402b5eed80e705feb45f8f2",
"text": "With advances in data collection technologies, tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices, however structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We proposed a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.",
"title": ""
},
{
"docid": "8622a61c6cc571688fb2b6e232ba0920",
"text": "The increasing use of electronic forms of communication presents new opportunities in the study of mental health, including the ability to investigate the manifestations of psychiatric diseases unobtrusively and in the setting of patients' daily lives. A pilot study to explore the possible connections between bipolar affective disorder and mobile phone usage was conducted. In this study, participants were provided a mobile phone to use as their primary phone. This phone was loaded with a custom keyboard that collected metadata consisting of keypress entry time and accelerometer movement. Individual character data with the exceptions of the backspace key and space bar were not collected due to privacy concerns. We propose an end-to-end deep architecture based on late fusion, named DeepMood, to model the multi-view metadata for the prediction of mood scores. Experimental results show that 90.31% prediction accuracy on the depression score can be achieved based on session-level mobile phone typing dynamics which is typically less than one minute. It demonstrates the feasibility of using mobile phone metadata to infer mood disturbance and severity.",
"title": ""
}
] |
[
{
"docid": "4711fe62beb8b1d3b50789bfa3d5dd06",
"text": "MOTIVATION\nCore sets are necessary to ensure that access to useful alleles or characteristics retained in genebanks is guaranteed. We have successfully developed a computational tool named 'PowerCore' that aims to support the development of core sets by reducing the redundancy of useful alleles and thus enhancing their richness.\n\n\nRESULTS\nThe program, using a new approach completely different from any other previous methodologies, selects entries of core sets by the advanced M (maximization) strategy implemented through a modified heuristic algorithm. The developed core set has been validated to retain all characteristics for qualitative traits and all classes for quantitative ones. PowerCore effectively selected the accessions with higher diversity representing the entire coverage of variables and gave a 100% reproducible list of entries whenever repeated.\n\n\nAVAILABILITY\nPowerCore software uses the .NET Framework Version 1.1 environment which is freely available for the MS Windows platform. The files can be downloaded from http://genebank.rda.go.kr/powercore/. The distribution of the package includes executable programs, sample data and a user manual.",
"title": ""
},
{
"docid": "580c53294eed52453db7534da5db4985",
"text": "Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination; Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace; Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has following advantages: 1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and 3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.",
"title": ""
},
{
"docid": "fe7c53018830103e3ca0912e2f5c27df",
"text": "We investigate the effect of channel estimation error on the capacity of multiple input multiple output (MIMO) fading channels. We study lower and upper bounds of mutual information under channel estimation error, and show that the two bounds are tight for Gaussian inputs. Assuming Gaussian inputs we also derive tight lower bounds of ergodic and outage capacities and optimal transmitter power allocation strategies that achieve the bounds under perfect feedback. For the ergodic capacity, the optimal strategy is a modified waterfilling over the spatial (antenna) and temporal (fading) domains. This strategy is close to optimum under small feedback delays, but when the delay is large, equal powers should be allocated across spatial dimensions. For the outage capacity, the optimal scheme is a spatial waterfilling and temporal truncated channel inversion. Numerical results show that some capacity gain is obtained by spatial power allocation. Temporal power adaptation, on the other hand, gives negligible gain in terms of ergodic capacity, but greatly enhances outage performance. December 12, 2005 DRAFT",
"title": ""
},
{
"docid": "6b9d8ff2c31b672832e2a81fbbcde583",
"text": "ion in Rationale Models. The design goal of KBSA-ADM was to offer a coherent series of rationale models based on results of the REMAP project (Ramesh and Dhar 1992) for maintaining rationale at different levels of detail. Figure 19: Simple Rationale Model The model sketched in Figure 19 is used for capturing rationale at a simple level of detail. It links an OBJECT with its RATIONALE. The model in Figure 19 also provides for the explicit representation of ASSUMPTIONS and DEPENDENCIES among them. Thus, using this model, the assumptions providing justifications to the creation of objects can be explicitly identified and reasoned with. As changes in such assumptions are a primary factor in the",
"title": ""
},
{
"docid": "4284e9bbe3bf4c50f9e37455f1118e6b",
"text": "A longevity revolution (Butler, 2008) is occurring across the globe. Because of factors ranging from the reduction of early-age mortality to an increase in life expectancy at later ages, most of the world’s population is now living longer than preceding generations (Bengtson, 2014). There are currently more than 44 million older adults—typically defined as persons 65 years and older—living in the United States, and this number is expected to increase to 98 million by 2060 (Administration on Aging, 2016). Although most older adults report higher levels of life satisfaction than do younger or middle-aged adults (George, 2010), between 5.6 and 8 million older Americans have a diagnosable mental health or substance use disorder (Bartels & Naslund, 2013). Furthermore, because of the rapid growth of the older adult population, this figure is expected to nearly double by 2030 (Bartels & Naslund, 2013). Mental health care is effective for older adults, and evidence-based treatments exist to address a broad range of issues, including anxiety disorders, depression, sleep disturbances, substance abuse, and some symptoms of dementia (Myers & Harper, 2004). Counseling interventions may also be beneficial for nonclinical life transitions, such as coping with loss, adjusting to retirement and a reduced income, and becoming a grandparent (Myers & Harper, 2004). Yet, older adults are underserved when it comes to mental",
"title": ""
},
{
"docid": "b7b1153067a784a681f2c6d0105acb2a",
"text": "Investigations of the human connectome have elucidated core features of adult structural networks, particularly the crucial role of hub-regions. However, little is known regarding network organisation of the healthy elderly connectome, a crucial prelude to the systematic study of neurodegenerative disorders. Here, whole-brain probabilistic tractography was performed on high-angular diffusion-weighted images acquired from 115 healthy elderly subjects (age 76-94 years; 65 females). Structural networks were reconstructed between 512 cortical and subcortical brain regions. We sought to investigate the architectural features of hub-regions, as well as left-right asymmetries, and sexual dimorphisms. We observed that the topology of hub-regions is consistent with a young adult population, and previously published adult connectomic data. More importantly, the architectural features of hub connections reflect their ongoing vital role in network communication. We also found substantial sexual dimorphisms, with females exhibiting stronger inter-hemispheric connections between cingulate and prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized subnetworks consistent with the neural circuitry specialised for language and executive functions, whilst rightward subnetworks were dominant in visual and visuospatial streams. These findings provide insights into healthy brain ageing and provide a benchmark for the study of neurodegenerative disorders such as Alzheimer's disease (AD) and frontotemporal dementia (FTD).",
"title": ""
},
{
"docid": "9ce5d15c444d91f8db50a781f438fa29",
"text": "In this paper, we explore the relationship between Facebook users’ privacy concerns, relationship maintenance strategies, and social capital outcomes. Previous research has found a positive relationship between various measures of Facebook use and perceptions of social capital, i.e., one’s access to social and information-based resources. Other research has found that social network site users with high privacy concerns modify their disclosures on the site. However, no research to date has empirically tested how privacy concerns and disclosure strategies interact to influence social capital outcomes. To address this gap in the literature, we explored these questions with survey data (N=230). Findings indicate that privacy concerns and behaviors predict disclosures on Facebook, but not perceptions of social capital. In addition, when looking at predictors of social capital, we identify interaction effects between users’ network composition and their use of privacy features.",
"title": ""
},
{
"docid": "2cebd9275e30da41a97f6d77207cc793",
"text": "Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks are performed in a timely manner. Additionally, execution of mission specific tasks such as imaging a room must be balanced against the need to perform more general tasks such as obstacle avoidance. This problem has been addressed by maintaining relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain, and demonstrate that while the problem state space is countably infinite, we may leverage the problem’s structure to guarantee efficient learning.",
"title": ""
},
{
"docid": "69b9389893cc6b72c94d5c5b8ed940ae",
"text": "Due to the rapid growth of network infrastructure and sensor, the age of the IoT (internet of things) that can be implemented into the smart car, smart home, smart building, and smart city is coming. IoT is a very useful ecosystem that provides various services (e.g., amazon echo); however, at the same time, risk can be huge too. Collecting information to help people could lead serious information leakage, and if IoT is combined with critical control system (e.g., train control system), security attack would cause loss of lives. Furthermore, research on IoT security requirements is insufficient now. Therefore, this paper focuses on IoT security, and its requirements. First, we propose basic security requirements of IoT by analyzing three basic characteristics (i.e., heterogeneity, resource constraint, dynamic environment). Then, we suggest six key elements of IoT (i.e., IoT network, cloud, user, attacker, service, platform) and analyze their security issues for overall security requirements. In addition, we evaluate several IoT security requirement researches.",
"title": ""
},
{
"docid": "ae67aadc3cddd3642bf0a7f6336b9817",
"text": "To increase efficacy in traditional classroom courses as well as in Massive Open Online Courses (MOOCs), automated systems supporting the instructor are needed. One important problem is to automatically detect students that are going to do poorly in a course early enough to be able to take remedial actions. Existing grade prediction systems focus on maximizing the accuracy of the prediction while overseeing the importance of issuing timely and personalized predictions. This paper proposes an algorithm that predicts the final grade of each student in a class. It issues a prediction for each student individually, when the expected accuracy of the prediction is sufficient. The algorithm learns online what is the optimal prediction and time to issue a prediction based on past history of students' performance in a course. We derive a confidence estimate for the prediction accuracy and demonstrate the performance of our algorithm on a dataset obtained based on the performance of approximately 700 UCLA undergraduate students who have taken an introductory digital signal processing over the past seven years. We demonstrate that for 85% of the students we can predict with 76% accuracy whether they are going do well or poorly in the class after the fourth course week. Using data obtained from a pilot course, our methodology suggests that it is effective to perform early in-class assessments such as quizzes, which result in timely performance prediction for each student, thereby enabling timely interventions by the instructor (at the student or class level) when necessary.",
"title": ""
},
{
"docid": "80fe141d88740955f189e8e2bf4c2d89",
"text": "Predictions concerning development, interrelations, and possible independence of working memory, inhibition, and cognitive flexibility were tested in 325 participants (roughly 30 per age from 4 to 13 years and young adults; 50% female). All were tested on the same computerized battery, designed to manipulate memory and inhibition independently and together, in steady state (single-task blocks) and during task-switching, and to be appropriate over the lifespan and for neuroimaging (fMRI). This is one of the first studies, in children or adults, to explore: (a) how memory requirements interact with spatial compatibility and (b) spatial incompatibility effects both with stimulus-specific rules (Simon task) and with higher-level, conceptual rules. Even the youngest children could hold information in mind, inhibit a dominant response, and combine those as long as the inhibition required was steady-state and the rules remained constant. Cognitive flexibility (switching between rules), even with memory demands minimized, showed a longer developmental progression, with 13-year-olds still not at adult levels. Effects elicited only in Mixed blocks with adults were found in young children even in single-task blocks; while young children could exercise inhibition in steady state it exacted a cost not seen in adults, who (unlike young children) seemed to re-set their default response when inhibition of the same tendency was required throughout a block. The costs associated with manipulations of inhibition were greater in young children while the costs associated with increasing memory demands were greater in adults. Effects seen only in RT in adults were seen primarily in accuracy in young children. Adults slowed down on difficult trials to preserve accuracy; but the youngest children were impulsive; their RT remained more constant but at an accuracy cost on difficult trials. Contrary to our predictions of independence between memory and inhibition, when matched for difficulty RT correlations between these were as high as 0.8, although accuracy correlations were less than half that. Spatial incompatibility effects and global and local switch costs were evident in children and adults, differing only in size. Other effects (e.g., asymmetric switch costs and the interaction of switching rules and switching response-sites) differed fundamentally over age.",
"title": ""
},
{
"docid": "dbeb76c985630a733c3d1956119e88e2",
"text": "Electromagnetic signals of low frequency have been shown to be durably produced in aqueous dilutions of the Human Imunodeficiency Virus DNA. In vivo, HIV DNA signals are detected only in patients previously treated by antiretroviral therapy and having no detectable viral RNA copies in their blood. We suggest that the treatment of AIDS patients pushes the virus towards a new mode of replication implying only DNA, thus forming a reservoir insensitive to retroviral inhibitors. Implications for new approaches aimed at eradicating HIV infection are discussed.",
"title": ""
},
{
"docid": "7fed5f87d8f009dfd8f1b0143cdb3291",
"text": "There is no doubt that controlled and pulsatile drug delivery system is an important challenge in medicine over the conventional drug delivery system in case of therapeutic efficacy. However, the conventional drug delivery systems often offer a limited by their inability to drug delivery which consists of systemic toxicity, narrow therapeutic window, complex dosing schedule for long term treatment etc. Therefore, there has been a search for the drug delivery system that exhibit broad enhancing activity for more drugs with less complication. More recently, some elegant study has noted that, a new type of micro-electrochemical system or MEMS-based drug delivery systems called microchip has been improved to overcome the problems related to conventional drug delivery. Moreover, micro-fabrication technology has enabled to develop the implantable controlled released microchip devices with improved drug administration and patient compliance. In this article, we have presented an overview of the investigations on the feasibility and application of microchip as an advanced drug delivery system. Commercial manufacturing materials and methods, related other research works and current advancement of the microchips for controlled drug delivery have also been summarized.",
"title": ""
},
{
"docid": "f7bdf07ef7a45c3e261e4631743c1882",
"text": "Deep reinforcement learning (RL) methods have significant potential for dialogue policy optimisation. However, they suffer from a poor performance in the early stages of learning. This is especially problematic for on-line learning with real users. Two approaches are introduced to tackle this problem. Firstly, to speed up the learning process, two sampleefficient neural networks algorithms: trust region actor-critic with experience replay (TRACER) and episodic natural actorcritic with experience replay (eNACER) are presented. For TRACER, the trust region helps to control the learning step size and avoid catastrophic model changes. For eNACER, the natural gradient identifies the steepest ascent direction in policy space to speed up the convergence. Both models employ off-policy learning with experience replay to improve sampleefficiency. Secondly, to mitigate the cold start issue, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning. Combining these two approaches, we demonstrate a practical approach to learning deep RLbased dialogue policies and demonstrate their effectiveness in a task-oriented information seeking domain.",
"title": ""
},
{
"docid": "6df61e330f6b71c4ef136e3a2220a5e2",
"text": "In recent years, we have seen significant advancement in technologies to bring about smarter cities worldwide. The interconnectivity of things is the key enabler in these initiatives. An important building block is smart mobility, and it revolves around resolving land transport challenges in cities with dense populations. A transformative direction that global stakeholders are looking into is autonomous vehicles and the transport infrastructure to interconnect them to the traffic management system (that is, vehicle to infrastructure connectivity), as well as to communicate with one another (that is, vehicle to vehicle connectivity) to facilitate better awareness of road conditions. A number of countries had also started to take autonomous vehicles to the roads to conduct trials and are moving towards the plan for larger scale deployment. However, an important consideration in this space is the security of the autonomous vehicles. There has been an increasing interest in the attacks and defences of autonomous vehicles as these vehicles are getting ready to go onto the roads. In this paper, we aim to organize and discuss the various methods of attacking and defending autonomous vehicles, and propose a comprehensive attack and defence taxonomy to better categorize each of them. Through this work, we hope that it provides a better understanding of how targeted defences should be put in place for targeted attacks, and for technologists to be more mindful of the pitfalls when developing architectures, algorithms and protocols, so as to realise a more secure infrastructure composed of dependable autonomous vehicles.",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "f6ad0d01cb66c1260c1074c4f35808c6",
"text": "BACKGROUND\nUnilateral spatial neglect causes difficulty attending to one side of space. Various rehabilitation interventions have been used but evidence of their benefit is lacking.\n\n\nOBJECTIVES\nTo assess whether cognitive rehabilitation improves functional independence, neglect (as measured using standardised assessments), destination on discharge, falls, balance, depression/anxiety and quality of life in stroke patients with neglect measured immediately post-intervention and at longer-term follow-up; and to determine which types of interventions are effective and whether cognitive rehabilitation is more effective than standard care or an attention control.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched June 2012), MEDLINE (1966 to June 2011), EMBASE (1980 to June 2011), CINAHL (1983 to June 2011), PsycINFO (1974 to June 2011), UK National Research Register (June 2011). We handsearched relevant journals (up to 1998), screened reference lists, and tracked citations using SCISEARCH.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) of cognitive rehabilitation specifically aimed at spatial neglect. We excluded studies of general stroke rehabilitation and studies with mixed participant groups, unless more than 75% of their sample were stroke patients or separate stroke data were available.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected studies, extracted data, and assessed study quality. For subgroup analyses, review authors independently categorised the approach underlying the cognitive intervention as either 'top-down' (interventions that encourage awareness of the disability and potential compensatory strategies) or 'bottom-up' (interventions directed at the impairment but not requiring awareness or behavioural change, e.g. wearing prisms or patches).\n\n\nMAIN RESULTS\nWe included 23 RCTs with 628 participants (adding 11 new RCTs involving 322 new participants for this update). Only 11 studies were assessed to have adequate allocation concealment, and only four studies to have a low risk of bias in all categories assessed. Most studies measured outcomes using standardised neglect assessments: 15 studies measured effect on activities of daily living (ADL) immediately after the end of the intervention period, but only six reported persisting effects on ADL. One study (30 participants) reported discharge destination and one study (eight participants) reported the number of falls.Eighteen of the 23 included RCTs compared cognitive rehabilitation with any control intervention (placebo, attention or no treatment). Meta-analyses demonstrated no statistically significant effect of cognitive rehabilitation, compared with control, for persisting effects on either ADL (five studies, 143 participants) or standardised neglect assessments (eight studies, 172 participants), or for immediate effects on ADL (10 studies, 343 participants). In contrast, we found a statistically significant effect in favour of cognitive rehabilitation compared with control, for immediate effects on standardised neglect assessments (16 studies, 437 participants, standardised mean difference (SMD) 0.35, 95% confidence interval (CI) 0.09 to 0.62). However, sensitivity analyses including only studies of high methodological quality removed evidence of a significant effect of cognitive rehabilitation.Additionally, five of the 23 included RCTs compared one cognitive rehabilitation intervention with another. 
These included three studies comparing a visual scanning intervention with another cognitive rehabilitation intervention, and two studies (three comparison groups) comparing a visual scanning intervention plus another cognitive rehabilitation intervention with a visual scanning intervention alone. Only two small studies reported a measure of functional disability and there was considerable heterogeneity within these subgroups (I² > 40%) when we pooled standardised neglect assessment data, limiting the ability to draw generalised conclusions.Subgroup analyses exploring the effect of having an attention control demonstrated some evidence of a statistically significant difference between those comparing rehabilitation with attention control and those with another control or no treatment group, for immediate effects on standardised neglect assessments (test for subgroup differences, P = 0.04).\n\n\nAUTHORS' CONCLUSIONS\nThe effectiveness of cognitive rehabilitation interventions for reducing the disabling effects of neglect and increasing independence remains unproven. As a consequence, no rehabilitation approach can be supported or refuted based on current evidence from RCTs. However, there is some very limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. This emerging evidence justifies further clinical trials of cognitive rehabilitation for neglect. However, future studies need to have appropriate high quality methodological design and reporting, to examine persisting effects of treatment and to include an attention control comparator.",
"title": ""
},
{
"docid": "d396f95b96ba06154effb6df6991a092",
"text": "Wireless networks have become the main form of Internet access. Statistics show that the global mobile Internet penetration should exceed 70% until 2019. Wi-Fi is an important player in this change. Founded on IEEE 802.11, this technology has a crucial impact in how we share broadband access both in domestic and corporate networks. However, recent works have indicated performance issues in Wi-Fi networks, mainly when they have been deployed without planning and under high user density. Hence, different collision avoidance techniques and Medium Access Control protocols have been designed in order to improve Wi-Fi performance. Analyzing the collision problem, this work strengthens the claims found in the literature about the low Wi-Fi performance under dense scenarios. Then, in particular, this article overviews the MAC protocols used in the IEEE 802.11 standard and discusses solutions to mitigate collisions. Finally, it contributes presenting future trends in MAC protocols. This assists in foreseeing expected improvements for the next generation of Wi-Fi devices.",
"title": ""
},
{
"docid": "87b56bc0c5ebbddc283bb15067adf6e0",
"text": "in printed reviews, without the prior permission of the publisher. Semiotic Engineering Methods for Scientific Research in HCI. Clarisse Sieckenius de Souza. order to produce usable concepts, models and methods for the benefit of non-. research projects at the intersection between semiotics and HCI. In order to be of any scientific value, semiotic engineering must provide. www2.serg.inf.puc-rio.br/docs/MonteiroDeSouzaLeitao2013.pdf. To date, there hasnt been enough empirical research in HCI exploring this complex phenomenon. usable concepts, models and methods for the benefit of nonsemiotician [2]. In order to be of any scientific value, semiotic engineering must provide. Dept. of Computer Science/Center for Human Machine Interaction, University of. In my experience, semiotics can be useful for the HCI-field, but the purely analytic. cated techniques for montage — the temporal succession of shots — involving rhythm. But semiotic research need not to be confined to the user interface.",
"title": ""
},
{
"docid": "96607113a8b6d0ca1c043d183420996b",
"text": "Primary retroperitoneal masses include a diverse, and often rare, group of neoplastic and non-neoplastic entities that arise within the retroperitoneum but do not originate from any retroperitoneal organ. Their overlapping appearances on cross-sectional imaging may pose a diagnostic challenge to the radiologist; familiarity with characteristic imaging features, together with relevant clinical information, helps to narrow the differential diagnosis. In this article, a systematic approach to identifying and classifying primary retroperitoneal masses is described. The normal anatomy of the retroperitoneum is reviewed with an emphasis on fascial planes, retroperitoneal compartments, and their contents using cross-sectional imaging. Specific radiologic signs to accurately identify an intra-abdominal mass as primary retroperitoneal are presented, first by confirming the location as retroperitoneal and secondly by excluding an organ of origin. A differential diagnosis based on a predominantly solid or cystic appearance, including neoplastic and non-neoplastic entities, is elaborated. Finally, key diagnostic clues based on characteristic imaging findings are described, which help to narrow the differential diagnosis. This article provides a comprehensive overview of the cross-sectional imaging features of primary retroperitoneal masses, including normal retroperitoneal anatomy, radiologic signs of retroperitoneal masses and the differential diagnosis of solid and cystic, neoplastic and non-neoplastic retroperitoneal masses, with a view to assist the radiologist in narrowing the differential diagnosis.",
"title": ""
}
] |
scidocsrr
|
17114328ad9cd2ecf6b3aa46202bbf05
|
Sensor enabled wearable RFID technology for mitigating the risk of falls near beds
|
[
{
"docid": "7db00719532ab0d9b408d692171d908f",
"text": "The real-time monitoring of human movement can provide valuable information regarding an individual's degree of functional ability and general level of activity. This paper presents the implementation of a real-time classification system for the types of human movement associated with the data acquired from a single, waist-mounted triaxial accelerometer unit. The major advance proposed by the system is to perform the vast majority of signal processing onboard the wearable unit using embedded intelligence. In this way, the system distinguishes between periods of activity and rest, recognizes the postural orientation of the wearer, detects events such as walking and falls, and provides an estimation of metabolic energy expenditure. A laboratory-based trial involving six subjects was undertaken, with results indicating an overall accuracy of 90.8% across a series of 12 tasks (283 tests) involving a variety of movements related to normal daily activities. Distinction between activity and rest was performed without error; recognition of postural orientation was carried out with 94.1% accuracy, classification of walking was achieved with less certainty (83.3% accuracy), and detection of possible falls was made with 95.6% accuracy. Results demonstrate the feasibility of implementing an accelerometry-based, real-time movement classifier using embedded intelligence",
"title": ""
}
] |
[
{
"docid": "a1fb87b94d93da7aec13044d95ee1e44",
"text": "Many natural language processing tasks solely rely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies by soft probabilities between every two tokens, but they are not effective and efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, “reinforced self-attention (ReSA)”, for the mutual benefit of each other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention called “reinforced sequence sampling (RSS)”, selecting tokens in parallel and trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, “reinforced self-attention network (ReSAN)”, solely based on ReSA. It achieves state-of-the-art performance on both Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.",
"title": ""
},
{
"docid": "13bfce7105cab1e4ea01fe94d04bcb97",
"text": "Recent years have seen a steady rise in the incidence of cutaneous malignant melanoma worldwide. Although it is now appreciated that the key to understanding the process by which melanocytes are transformed into malignant melanoma lies in the interplay between genetic factors and the ultraviolet (UV) spectrum of sunlight, the nature of this relation has remained obscure. Recently, prospects for elucidating the molecular mechanisms underlying such gene–environment interactions have brightened considerably through the development of UV-responsive experimental animal models of melanoma. Genetically engineered mice and human skin xenografts constitute novel platforms upon which to build studies designed to elucidate the pathogenesis of UV-induced melanomagenesis. The future refinement of these in vivo models should provide a wealth of information on the cellular and genetic targets of UV, the pathways responsible for the repair of UV-induced DNA damage, and the molecular interactions between melanocytes and other skin cells in response to UV. It is anticipated that exploitation of these model systems will contribute significantly toward the development of effective approaches to the prevention and treatment of melanoma.",
"title": ""
},
{
"docid": "e48903be16ccab7bf1263e0a407e5d66",
"text": "This research applies Lotka’s Law to metadata on open source software development. Lotka’s Law predicts the proportion of authors at different levels of productivity. Open source software development harnesses the creativity of thousands of programmers worldwide, is important to the progress of the Internet and many other computing environments, and yet has not been widely researched. We examine metadata from the Linux Software Map (LSM), which documents many open source projects, and Sourceforge, one of the largest resources for open source developers. Authoring patterns found are comparable to prior studies of Lotka’s Law for scientific and scholarly publishing. Lotka’s Law was found to be effective in understanding software development productivity patterns, and offer promise in predicting aggregate behavior of open source developers.",
"title": ""
},
{
"docid": "bf5f53216163a3899cc91af060375250",
"text": "Received Feb 13, 2018 Revised Apr 18, 2018 Accepted May 21, 2018 One of the biomedical image problems is the appearance of the bubbles in the slide that could occur when air passes through the slide during the preparation process. These bubbles may complicate the process of analysing the histopathological images. Aims: The objective of this study is to remove the bubble noise from the histopathology images, and then predict the tissues that underlie it using the fuzzy controller in cases of remote pathological diagnosis. Methods: Fuzzy logic uses the linguistic definition to recognize the relationship between the input and the activity, rather than using difficult numerical equation. Mainly there are five parts, starting with accepting the image, passing through removing the bubbles, and ending with predict the tissues. These were implemented by defining membership functions between colours range using MATLAB. Results: 50 histopathological images were tested on four types of membership functions (MF); the results show that (nine-triangular) MF get 75.4% correctly predicted pixels versus 69.1, 72.31 and 72% for (five-triangular), (five-Gaussian) and (nine-Gaussian) respectively. Conclusions: In line with the era of digitally driven epathology, this process is essentially recommended to ensure quality interpretation and analyses of the processed slides; thus overcoming relevant limitations. Keyword:",
"title": ""
},
{
"docid": "631b473342cc30360626eaea0734f1d8",
"text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.",
"title": ""
},
{
"docid": "3685470e05a3f763817b9c6f28747336",
"text": "G' A. Linz, Peter. An introduction to formal languages and automata / Peter Linz'--3'd cd charrgcs ftrr the second edition wercl t)volutionary rather than rcvolrrtionary and addressed Initially, I felt that giving solutions to exercises was undesirable hecause it lirrritcd the Chapter 1 fntroduction to the Theory of Computation. Issuu solution manual to introduction to languages. Introduction theory computation 2nd edition solution manual sipser. Structural Theory of automata: solution manual of theory of computation. Kellison theory of interest pdf. Transformation, Sylvester's theorem(without proof), Solution of Second Order. Linear Differential Higher Engineering Mathematics by B.S. Grewal, 40th Edition, Khanna. Publication. 2. Introduction Of Automata Theory, Languages and computationHopcroft. Motwani&Ulman UNIX system Utilities manual. 4.",
"title": ""
},
{
"docid": "9766b4967a85fec75ea6b89de7268f6f",
"text": "Smartphones are becoming more and more popular and, as a consequence, malware writers are increasingly engaged to develop new threats and propagate them through official and third-party markets. In addition to the propagation vectors, malware is also evolving quickly the techniques adopted for infecting victims and hiding their malicious nature to antimalware scanning. From SMS Trojans to legitimate applications repacked with malicious payload, from AES encrypted root exploits to the dynamic loading of a payload retrieved from a remote server: malicious code is becoming more and more hard to detect. In this paper we experimentally evaluate two techniques for detecting Android malware: the first one is based on Hidden Markov Model, while the second one exploits structural entropy. These two techniques have been successfully applied to detect PCs viruses in previous works, and only one work in literature analyzes the application of HMM to the detection of Android malware. We demonstrate that these methods, which reveal effective for PCs virus, are also successful for detecting and classifying mobile malware. Our results are promising: we obtain a precision of 0.96 to discriminate a malware application, and a precision of 0.978 to identify the malware family. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b109db8e315d904901021224745c9e26",
"text": "IP lookup and routing table update affect the speed at which a router forwards packets. This study proposes a new data structure for dynamic router tables used in IP lookup and update, called the Multi-inherited Search Tree (MIST). Partitioning each prefix according to an index value and removing the relationships among prefixes enables performing IP lookup operations efficiently. Because a prefix trie is used as a substructure, memory can be consumed and dynamic router-table operations can be performed efficiently. Experiments using real IPv4 routing databases indicated that the MIST uses memory efficiently and performs lookup, insert, and delete operations effectively.",
"title": ""
},
{
"docid": "146b5beb0c82f230a6896599269c5b81",
"text": "The link between the built environment and human behavior has long been of interest to the field of urban planning, but direct assessments of the links between the built environment and physical activity as it influences personal health are still rare in the field. Yet the concepts, theories, and methods used by urban planners provide a foundation for an emerging body of research on the relationship between the built environment and physical activity. Recent research efforts in urban planning have focused on the idea that land use and design policies can be used to increase transit use as well as walking and bicycling. The development of appropriate measures for the built environment and for travel behavior is an essential element of this research. The link between the built environment and travel behavior is then made using theoretical frameworks borrowed from economics, and in particular, the concept of travel as a derived demand. The available evidence lends itself to the argument that a combination of urban design, land use patterns, and transportation systems that promotes walking and bicycling will help create active, healthier, and more livable communities. To provide more conclusive evidence, however, researchers must address the following issues: An alternative to the derived-demand framework must be developed for walking, measures of the built environment must be refined, and more-complete data on walking must be developed. In addition, detailed data on the built environment must be spatially matched to detailed data on travel behavior.",
"title": ""
},
{
"docid": "d820fa9b47c51d4ab35fc8ebbe4a5ba7",
"text": "In current era of modernization and globalization it is observed that individuals are in the quest for mental peace and spiritual comfort even though they have achieved many scientific advancements. The major reason behind this uncomfortable condition is that they have indulged in exterior world and became careless about the religion and its practices. The most alarming fact regarding faith inculcation is the transformation of wrong, vague and ambiguous concepts to children at early ages which prolongs for whole life. Childhood is the period when concepts of right and wrong are strongly developed and most important agent that contributes to this concept making is parents. Keeping in mind this fact, the present study has highlighted the role of family in providing religious and moral values to their children. The qualitative approach has been used by adopting purposive sampling method. The focus group discussion has been conducted with families having an urban background. Some surprising facts has led the researcher to the conclusion of deterioration in the family as the social institution as a major cause which is resulted The Role of Family in Teaching Religious Moral Values to their Children 259 into not only a moral decay in the society but also the reason of socio-economic problems in country.",
"title": ""
},
{
"docid": "6a79ac3770966913cbfadf47a700a4c7",
"text": "In this paper, we introduce an alternative approach to Temporal Answer Set Programming that relies on a variation of Temporal Equilibrium Logic (TEL) for finite traces. This approach allows us to even out the expressiveness of TEL over infinite traces with the computational capacity of (incremental) Answer Set Programming (ASP). Also, we argue that finite traces are more natural when reasoning about action and change. As a result, our approach is readily implementable via multi-shot ASP systems and benefits from an extension of ASP’s full-fledged input language with temporal operators. This includes future as well as past operators whose combination offers a rich temporal modeling language. For computation, we identify the class of temporal logic programs and prove that it constitutes a normal form for our approach. Finally, we outline two implementations, a generic one and an extension of the ASP system clingo. Under consideration for publication in Theory and Practice of Logic Programming (TPLP)",
"title": ""
},
{
"docid": "9ed575b1ae41ddab041adeaf14e90735",
"text": "This paper presents a semi-autonomous controller for integrated design of an active safety system. A model of the driver’s nominal behavior is estimated based on observed behavior. A nonlinear model of the vehicle is developed that utilizes a coordinate transformation which allows for obstacles and road bounds to be modeled as constraints while still allowing the controller full steering and braking authority. A Nonlinear Model Predictive Controller (NMPC) is designed that utilizes the vehicle and driver models to predict a future threat of collision or roadway departure. Simulations are presented which demonstrate the ability of the suggested approach to successfully avoid multiple obstacles while staying safely within the road bounds.",
"title": ""
},
{
"docid": "9f005054e640c2db97995c7540fe2034",
"text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.",
"title": ""
},
{
"docid": "a559652585e2df510c1dd060cdf65ead",
"text": "Experience replay is an important technique for addressing sample-inefficiency in deep reinforcement learning (RL), but faces difficulty in learning from binary and sparse rewards due to disproportionately few successful experiences in the replay buffer. Hindsight experience replay (HER) (Andrychowicz et al. 2017) was recently proposed to tackle this difficulty by manipulating unsuccessful transitions, but in doing so, HER introduces a significant bias in the replay buffer experiences and therefore achieves a suboptimal improvement in sample-efficiency. In this paper, we present an analysis on the source of bias in HER, and propose a simple and effective method to counter the bias, to most effectively harness the sample-efficiency provided by HER. Our method, motivated by counter-factual reasoning and called ARCHER, extends HER with a trade-off to make rewards calculated for hindsight experiences numerically greater than real rewards. We validate our algorithm on two continuous control environments from DeepMind Control Suite (Tassa et al. 2018) Reacher and Finger, which simulate manipulation tasks with a robotic arm in combination with various reward functions, task complexities and goal sampling strategies. Our experiments consistently demonstrate that countering bias using more aggressive hindsight rewards increases sample efficiency, thus establishing the greater benefit of ARCHER in RL applications with limited computing budget.",
"title": ""
},
{
"docid": "2c222bb815ca26240e72072e5c9a1d42",
"text": "Novelty search is a state-of-the-art evolutionary approach that promotes behavioural novelty instead of pursuing a static objective. Along with a large number of successful applications, many different variants of novelty search have been proposed. It is still unclear, however, how some key parameters and algorithmic components influence the evolutionary dynamics and performance of novelty search. In this paper, we conduct a comprehensive empirical study focused on novelty search's algorithmic components. We study the \"k\" parameter -- the number of nearest neighbours used in the computation of novelty scores; the use and function of an archive; how to combine novelty search with fitness-based evolution; and how to configure the mutation rate of the underlying evolutionary algorithm. Our study is conducted in a simulated maze navigation task. Our results show that the configuration of novelty search can have a significant impact on performance and behaviour space exploration. We conclude with a number of guidelines for the implementation and configuration of novelty search, which should help future practitioners to apply novelty search more effectively.",
"title": ""
},
{
"docid": "1a9d276c4571419e0d1b297f248d874d",
"text": "Organizational culture plays a critical role in the acceptance and adoption of agile principles by a traditional software development organization (Chan & Thong, 2008). Organizations must understand the differences that exist between traditional software development principles and agile principles. Based on an analysis of the literature published between 2003 and 2010, this study examines nine distinct organizational cultural factors that require change, including management style, communication, development team practices, knowledge management, and customer interactions.",
"title": ""
},
{
"docid": "b5cb64a0a17954310910d69c694ad786",
"text": "This paper proposes a hybrid of handcrafted rules and a machine learning method for chunking Korean. In the partially free word-order languages such as Korean and Japanese, a small number of rules dominate the performance due to their well-developed postpositions and endings. Thus, the proposed method is primarily based on the rules, and then the residual errors are corrected by adopting a memory-based machine learning method. Since the memory-based learning is an efficient method to handle exceptions in natural language processing, it is good at checking whether the estimates are exceptional cases of the rules and revising them. An evaluation of the method yields the improvement in F-score over the rules or various machine learning methods alone.",
"title": ""
},
{
"docid": "e2741784e6207b58238f2f7a34057b17",
"text": "Although Faster R-CNN based approaches have achieved promising results for text detection, their localization accuracy is not satisfactory in certain cases. In this paper, we propose to use a LocNet to improve the localization accuracy of a Faster R-CNN based text detector. Given a proposal generated by region proposal network (RPN), instead of predicting directly the bounding box coordinates of the concerned text instance, the proposal is enlarged to create a search region so that conditional probabilities to each row and column of this search region can be assigned, which are then used to infer accurately the concerned bounding box. Experiments demonstrate that the proposed approach boosts the localization accuracy for Faster R-CNN based text detection significantly. Consequently, our new text detector has achieved superior performance on ICDAR-2011, ICDAR-2013 and MULTILIGUL text detection benchmark tasks.",
"title": ""
},
{
"docid": "3692954147d1a60fb683001bd379047f",
"text": "OBJECTIVE\nThe current study aimed to compare the Philadelphia collar and an open-design cervical collar with regard to user satisfaction and cervical range of motion in asymptomatic adults.\n\n\nDESIGN\nSeventy-two healthy subjects (36 women, 36 men) aged 18 to 29 yrs were recruited for this study. Neck movements, including active flexion, extension, right/left lateral flexion, and right/left axial rotation, were assessed in each subject under three conditions--without wearing a collar and while wearing two different cervical collars--using a dual digital inclinometer. Subject satisfaction was assessed using a five-item self-administered questionnaire.\n\n\nRESULTS\nBoth Philadelphia and open-design collars significantly reduced cervical motions (P < 0.05). Compared with the Philadelphia collar, the open-design collar more greatly reduced cervical motions in three planes and the differences were statistically significant except for limiting flexion. Satisfaction scores for Philadelphia and open-design collars were 15.89 (3.87) and 19.94 (3.11), respectively.\n\n\nCONCLUSION\nBased on the data of the 72 subjects presented in this study, the open-design collar adequately immobilized the cervical spine as a semirigid collar and was considered cosmetically acceptable, at least for subjects aged younger than 30 yrs.",
"title": ""
}
] |
scidocsrr
|
db60fe1dbf57aaa238eeea5f571252ae
|
Discriminative acoustic word embeddings: Recurrent neural network-based approaches
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
}
] |
[
{
"docid": "0e57945ae40e8c0f08e92396c2592a78",
"text": "Frequent or contextually predictable words are often phonetically reduced, i.e. shortened and produced with articulatory undershoot. Explanations for phonetic reduction of predictable forms tend to take one of two approaches: Intelligibility-based accounts hold that talkers maximize intelligibility of words that might otherwise be difficult to recognize; production-based accounts hold that variation reflects the speed of lexical access and retrieval in the language production system. Here we examine phonetic variation as a function of phonological neighborhood density, capitalizing on the fact that words from dense phonological neighborhoods tend to be relatively difficult to recognize, yet easy to produce. We show that words with many phonological neighbors tend to be phonetically reduced (shortened in duration and produced with more centralized vowels) in connected speech, when other predictors of phonetic variation are brought under statistical control. We argue that our findings are consistent with the predictions of production-based accounts of pronunciation variation. 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d99747fb44a839a2ab8765c1176e4c77",
"text": "The aim of this paper is to explore text topic influence in authorship attribution. Specifically, we test the widely accepted belief that stylometric variables commonly used in authorship attribution are topic-neutral and can be used in multi-topic corpora. In order to investigate this hypothesis, we created a special corpus, which was controlled for topic and author simultaneously. The corpus consists of 200 Modern Greek newswire articles written by two authors in two different topics. Many commonly used stylometric variables were calculated and for each one we performed a two-way ANOVA test, in order to estimate the main effects of author, topic and the interaction between them. The results showed that most of the variables exhibit considerable correlation with the text topic and their exploitation in authorship analysis should be done with caution.",
"title": ""
},
{
"docid": "4cdef79370abcd380357c8be92253fa5",
"text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.",
"title": ""
},
{
"docid": "b4a18acfe5152bc1159aa23302d07c10",
"text": "Reading involves an interactive process in which the reader actively produces meaning through a set of mental processes. There is obviously an ongoing interaction between the reader and the text. Critical reading is related to thinking and that is why we cannot read without thinking. Critical reading involves the following skills: predicting, acknowledging, comparing, evaluating and decision-making. Schemata can be seen as the organized background knowledge, which leads the reader to expect and predict aspects in their interpretation of discourse.",
"title": ""
},
{
"docid": "dd4a53afc6af03fc323139b29dc024c5",
"text": "Log management and log auditing have become increasingly crucial for enterprises in this era of information and technology explosion. The log analysis technique is useful for discovering possible problems in business processes and preventing illegal-intrusion attempts and data-tampering attacks. Because of the complexity of the dynamically changing environment, auditing a tremendous number of data is a challenging issue. We provide a real-time audit mechanism to improve the aforementioned problems in log auditing. This mechanism was developed based on the Lempel-Ziv-Welch (LZW) compression technique to facilitate effective compression and provide reliable auditing log entries. The mechanism can be used to predict unusual activities when compressing the log data according to pre-defined auditing rules. Auditors using real-time and continuous monitoring can perceive instantly the most likely anomalies or exceptions that could cause problems. We also designed a user interface that allows auditors to define the various compression and audit parameters, using real log cases in the experiment to verify the feasibility and effectiveness of this proposed audit mechanism. In summary, this mechanism changes the log access method and improves the efficiency of log analysis. This mechanism greatly simplifies auditing so that auditors must only trace the sources and causes of the problems related to the detected anomalies. This greatly reduces the processing time of analytical audit procedures and the manual checking time, and improves the log audit efficiency.",
"title": ""
},
{
"docid": "58a016629de2a2556fae9ca3fa81040a",
"text": "This paper studies a type of image priors that are constructed implicitly through the alternating direction method of multiplier (ADMM) algorithm, called the algorithm-induced prior. Different from classical image priors which are defined before running the reconstruction algorithm, algorithm-induced priors are defined by the denoising procedure used to replace one of the two modules in the ADMM algorithm. Since such prior is not explicitly defined, analyzing the performance has been difficult in the past. Focusing on the class of symmetric smoothing filters, this paper presents an explicit expression of the prior induced by the ADMM algorithm. The new prior is reminiscent to the conventional graph Laplacian but with stronger reconstruction performance. It can also be shown that the overall reconstruction has an efficient closed-form implementation if the associated symmetric smoothing filter is low rank. The results are validated with experiments on image inpainting.",
"title": ""
},
{
"docid": "3afba1b1120923d28ab3d1dd6c79945e",
"text": "Signal processing on antenna arrays has received much recent attention in the mobile and wireless networking research communities, with array signal processing approaches addressing the problems of human movement detection, indoor mobile device localization, and wireless network security. However, there are two important challenges inherent in the design of these systems that must be overcome if they are to be of practical use on commodity hardware. First, phase differences between the radio oscillators behind each antenna can make readings unusable, and so must be corrected in order for most techniques to yield high-fidelity results. Second, while the number of antennas on commodity access points is usually limited, most array processing increases in fidelity with more antennas. These issues work in synergistic opposition to array processing: without phase offset correction, no phase-difference array processing is possible, and with fewer antennas, automatic correction of these phase offsets becomes even more challenging. We present Phaser, a system that solves these intertwined problems to make phased array signal processing truly practical on the many WiFi access points deployed in the real world. Our experimental results on three- and five-antenna 802.11-based hardware show that 802.11 NICs can be calibrated and synchronized to a 20° median phase error, enabling inexpensive deployment of numerous phase-difference based spectral analysis techniques previously only available on costly, special-purpose hardware.",
"title": ""
},
{
"docid": "22ad829acba8d8a0909f2b8e31c1f0c3",
"text": "Covariance matrices capture correlations that are invaluable in modeling real-life datasets. Using all d elements of the covariance (in d dimensions) is costly and could result in over-fitting; and the simple diagonal approximation can be over-restrictive. In this work, we present a new model, the Low-Rank Gaussian Mixture Model (LRGMM), for modeling data which can be extended to identifying partitions or overlapping clusters. The curse of dimensionality that arises in calculating the covariance matrices of the GMM is countered by using low-rank perturbed diagonal matrices. The efficiency is comparable to the diagonal approximation, yet one can capture correlations among the dimensions. Our experiments reveal the LRGMM to be an efficient and highly applicable tool for working with large high-dimensional datasets.",
"title": ""
},
{
"docid": "8db59f20491739420d9b40311705dbf1",
"text": "With object-oriented programming languages, Object Relational Mapping (ORM) frameworks such as Hibernate have gained popularity due to their ease of use and portability to different relational database management systems. Hibernate implements the Java Persistent API, JPA, and frees a developer from authoring software to address the impedance mismatch between objects and relations. In this paper, we evaluate the performance of Hibernate by comparing it with a native JDBC implementation using a benchmark named BG. BG rates the performance of a system for processing interactive social networking actions such as view profile, extend an invitation from one member to another, and other actions. Our key findings are as follows. First, an object-oriented Hibernate implementation of each action issues more SQL queries than its JDBC counterpart. This enables the JDBC implementation to provide response times that are significantly faster. Second, one may use the Hibernate Query Language (HQL) to refine the object-oriented Hibernate implementation to provide performance that approximates the JDBC implementation.",
"title": ""
},
{
"docid": "87e44334828cd8fd1447ab5c1b125ab3",
"text": "the guidance system. The types of steering commands vary depending on the phase of flight and the type of interceptor. For example, in the boost phase the flight control system may be designed to force the missile to track a desired flight-path angle or attitude. In the midcourse and terminal phases the system may be designed to track acceleration commands to effect an intercept of the target. This article explores several aspects of the missile flight control system, including its role in the overall missile system, its subsystems, types of flight control systems, design objectives, and design challenges. Also discussed are some of APL’s contributions to the field, which have come primarily through our role as Technical Direction Agent on a variety of Navy missile programs. he flight control system is a key element that allows the missile to meet its system performance requirements. The objective of the flight control system is to force the missile to achieve the steering commands developed by",
"title": ""
},
{
"docid": "0e600cedfbd143fe68165e20317c46d4",
"text": "We propose an efficient real-time automatic license plate recognition (ALPR) framework, particularly designed to work on CCTV video footage obtained from cameras that are not dedicated to the use in ALPR. At present, in license plate detection, tracking and recognition are reasonably well-tackled problems with many successful commercial solutions being available. However, the existing ALPR algorithms are based on the assumption that the input video will be obtained via a dedicated, high-resolution, high-speed camera and is/or supported by a controlled capture environment, with appropriate camera height, focus, exposure/shutter speed and lighting settings. However, typical video forensic applications may require searching for a vehicle having a particular number plate on noisy CCTV video footage obtained via non-dedicated, medium-to-low resolution cameras, working under poor illumination conditions. ALPR in such video content faces severe challenges in license plate localization, tracking and recognition stages. This paper proposes a novel approach for efficient localization of license plates in video sequence and the use of a revised version of an existing technique for tracking and recognition. A special feature of the proposed approach is that it is intelligent enough to automatically adjust for varying camera distances and diverse lighting conditions, a requirement for a video forensic tool that may operate on videos obtained by a diverse set of unspecified, distributed CCTV cameras.",
"title": ""
},
{
"docid": "eb639439559f3e4e3540e3e98de7a741",
"text": "This paper presents a deformable model for automatically segmenting brain structures from volumetric magnetic resonance (MR) images and obtaining point correspondences, using geometric and statistical information in a hierarchical scheme. Geometric information is embedded into the model via a set of affine-invariant attribute vectors, each of which characterizes the geometric structure around a point of the model from a local to a global scale. The attribute vectors, in conjunction with the deformation mechanism of the model, warrant that the model not only deforms to nearby edges, as is customary in most deformable surface models, but also that it determines point correspondences based on geometric similarity at different scales. The proposed model is adaptive in that it initially focuses on the most reliable structures of interest, and gradually shifts focus to other structures as those become closer to their respective targets and, therefore, more reliable. The proposed techniques have been used to segment boundaries of the ventricles, the caudate nucleus, and the lenticular nucleus from volumetric MR images.",
"title": ""
},
{
"docid": "c533f121483bfd8de0cf20c319af5ff1",
"text": "This article revisits the concept of biologic width, in particular its clinical consequences for treatment options and decisions in light of modern dentistry approaches such as biomimetics and minimally invasive procedures. In the past, due to the need to respect biologic width, clinicians were used to removing periodontal tissue around deep cavities, bone, and gum so that the limits of restorations were placed far away from the epithelium and connective attachments, in order to prevent tissue loss, root exposure, opening of the proximal area (leading to black holes), and poor esthetics. Furthermore, no material was placed subgingivally in case it led to periodontal inflammation and attachment loss. Today, with the more conservative approach to restorative dentistry, former subtractive procedures are being replaced with additive ones. In view of this, one could propose deep margin elevation (DME) instead of crown lengthening as a change of paradigm for deep cavities. The intention of this study was to overview the literature in search of scientific evidence regarding the consequences of DME with different materials, particularly on the surrounding periodontium, from a clinical and histologic point of view. A novel approach is to extrapolate results obtained during root coverage procedures on restored roots to hypothesize the nature of the healing of proximal attachment tissue on a proper bonded material during a DME. Three clinical cases presented here illustrate these procedures. The hypothesis of this study was that even though crown lengthening is a valuable procedure, its indications should decrease in time, given that DME, despite being a very demanding procedure, seems to be well tolerated by the surrounding periodontium, clinically and histologically.",
"title": ""
},
{
"docid": "b181d6fd999fdcd8c5e5b52518998175",
"text": "Hydrogels are used to create 3D microenvironments with properties that direct cell function. The current study demonstrates the versatility of hyaluronic acid (HA)-based hydrogels with independent control over hydrogel properties such as mechanics, architecture, and the spatial distribution of biological factors. Hydrogels were prepared by reacting furan-modified HA with bis-maleimide-poly(ethylene glycol) in a Diels-Alder click reaction. Biomolecules were photopatterned into the hydrogel by two-photon laser processing, resulting in spatially defined growth factor gradients. The Young's modulus was controlled by either changing the hydrogel concentration or the furan substitution on the HA backbone, thereby decoupling the hydrogel concentration from mechanical properties. Porosity was controlled by cryogelation, and the pore size distribution, by the thaw temperature. The addition of galactose further influenced the porosity, pore size, and Young's modulus of the cryogels. These HA-based hydrogels offer a tunable platform with a diversity of properties for directing cell function, with applications in tissue engineering and regenerative medicine.",
"title": ""
},
{
"docid": "121a388391c12de1329e74fdeebdaf10",
"text": "In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March, 2016--March, 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.",
"title": ""
},
{
"docid": "c4cfd9364c271e0af23a03c28f5c95ad",
"text": "Due to the different posture and view angle, the image will appear some objects that do not exist in another image of the same person captured by another camera. The region covered by new items adversely improved the difficulty of person re-identification. Therefore, we named these regions as Damaged Region (DR). To overcome the influence of DR, we propose a new way to extract feature based on the local region that divides both in the horizontal and vertical directions. Before splitting the image, we enlarge it with direction to increase the useful information, potentially reducing the impact of different viewing angles. Then each divided region is a separated part, and the results of the adjacent regions will be compared. As a result the region that gets a higher score is selected as the valid one, and which gets the lower score caused by pose variation and items occlusion will be invalid. Extensive experiments carried out on three person re-identification benchmarks, including VIPeR, PRID2011, CUHK01, clearly show the significant and consistent improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "425cc43b8e8199ba10fde09f7e237b70",
"text": "Recently business intelligence (BI) applications have been the primary agenda for many CIOs. However, the concept of BI is fairly new and to date there is no commonly agreed definition of BI. This paper explores the nebulous definitions and the various applications of BI through a comprehensive review of academic as well as practitioner’s literature. As a result, three main perspectives of BI have been identified, namely the management aspect, the technological aspect, and the product aspect. This categorization gives researchers, practitioners, and BI vendors a better idea of how different parties have approached BI thus far and is valuable in their, design, planning, and implementation of a contemporary BI system in the future. The categorization may even be a first effort towards a commonly agreed definition of BI.",
"title": ""
},
{
"docid": "57c6d587b602b17a3cbf3b9b3c72c6c9",
"text": "OBJECTIVE\nDevelopment of a rational and enforceable basis for controlling the impact of cannabis use on traffic safety.\n\n\nMETHODS\nAn international working group of experts on issues related to drug use and traffic safety evaluated evidence from experimental and epidemiological research and discussed potential approaches to developing per se limits for cannabis.\n\n\nRESULTS\nIn analogy to alcohol, finite (non-zero) per se limits for delta-9-tetrahydrocannabinol (THC) in blood appear to be the most effective approach to separating drivers who are impaired by cannabis use from those who are no longer under the influence. Limited epidemiological studies indicate that serum concentrations of THC below 10 ng/ml are not associated with an elevated accident risk. A comparison of meta-analyses of experimental studies on the impairment of driving-relevant skills by alcohol or cannabis suggests that a THC concentration in the serum of 7-10 ng/ml is correlated with an impairment comparable to that caused by a blood alcohol concentration (BAC) of 0.05%. Thus, a suitable numerical limit for THC in serum may fall in that range.\n\n\nCONCLUSIONS\nThis analysis offers an empirical basis for a per se limit for THC that allows identification of drivers impaired by cannabis. The limited epidemiological data render this limit preliminary.",
"title": ""
},
{
"docid": "53749eab6b23c026f9cb3b37a7f639f3",
"text": "This article presents a dual system model (DSM) of decision making under risk and uncertainty according to which the value of a gamble is a combination of the values assigned to it independently by the affective and deliberative systems. On the basis of research on dual process theories and empirical research in Hsee and Rottenstreich (2004) and Rottenstreich and Hsee (2001) among others, the DSM incorporates (a) individual differences in disposition to rational versus emotional decision making, (b) the affective nature of outcomes, and (c) different task construals within its framework. The model has good descriptive validity and accounts for (a) violation of nontransparent stochastic dominance, (b) fourfold pattern of risk attitudes, (c) ambiguity aversion, (d) common consequence effect, (e) common ratio effect, (f) isolation effect, and (g) coalescing and event-splitting effects. The DSM is also used to make several novel predictions of conditions under which specific behavior patterns may or may not occur.",
"title": ""
},
{
"docid": "cfcc5b98ebebe08475d68667aacaf46f",
"text": "Sequence alignment is an important task in bioinformatics which involves typical database search where data is in the form of DNA, RNA or protein sequence. For alignment various methods have been devised starting from pairwise alignment to multiple sequence alignment (MSA). To perform multiple sequence alignment various methods exists like progressive, iterative and concepts of dynamic programming in which we use Needleman Wunsch and Smith Waterman algorithms. This paper discusses various sequence alignment methods including their advantages and disadvantages. The alignment results of DNA sequence of chimpanzee and gorilla are shown.",
"title": ""
}
] |
scidocsrr
|
110ff45b86f8a246a20ef4666945d05f
|
A Domain Adaptation Regularization for Denoising Autoencoders
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "2ade63ea07a7c744c9bbfeab40c4e679",
"text": "Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.",
"title": ""
},
{
"docid": "b53f2f922661bfb14bf2181236fad566",
"text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. A naively Appeared in the proceedings of the ICML 2013, Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. Such issues of a mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training. Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). 
Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). These works essentially construct a classifier using the labeled source data, and impose structural constraints on the classifier using unlabeled target data. A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model (DLID): Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representation of the source and target inputs. However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model out-performs the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 2. An Overview of DLID At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation.
Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006), have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pretraining. The key difference is that in DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassman manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. Let the set of data samples for the source domain S be denoted by D_S, and that of the target domain T be denoted by D_T. Starting with all the source data samples D_S, we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from D_T, and decrease the proportion of samples drawn from D_S. In particular, let p ∈ [1, ..., P] be an index over the P datasets we generate. Then we have D_p = D_S for p = 1, D_p = D_T for p = P. For p ∈ [2, ..., P − 1], datasets D_p and D_{p+1} are created in a way so that the proportion of samples from D_T in D_p is less than in D_{p+1}. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between S and T.",
"title": ""
},
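A minimal sketch of the interpolating-path dataset construction described in the preceding passage, assuming plain Python lists as the domain datasets and a simple linear mixing schedule; the feature extractors trained on each intermediate set in the original work are omitted here:

```python
import random

def interpolating_datasets(d_source, d_target, num_sets, seed=0):
    """Build P datasets transitioning from all-source to all-target samples.

    d_source, d_target: lists of samples from the source and target domains.
    num_sets: P, the number of points on the interpolating path (P >= 2).
    The first dataset is all source, the last is all target, and intermediate
    ones mix the two with a gradually increasing target proportion.
    """
    rng = random.Random(seed)
    path = []
    for p in range(num_sets):
        target_fraction = p / (num_sets - 1)            # 0.0 -> 1.0 along the path
        n_target = round(target_fraction * len(d_target))
        n_source = round((1.0 - target_fraction) * len(d_source))
        mixed = rng.sample(d_source, n_source) + rng.sample(d_target, n_target)
        rng.shuffle(mixed)
        path.append(mixed)
    return path

# Toy usage: five datasets interpolating between two small domains.
source = [f"s{i}" for i in range(10)]
target = [f"t{i}" for i in range(10)]
for p, dataset in enumerate(interpolating_datasets(source, target, num_sets=5), start=1):
    print(p, sorted(dataset))
```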
{
"docid": "d8d9bc717157d03c884962999c514033",
"text": "Topic models have been widely used to identify topics in text corpora. It is also known that purely unsupervised models often result in topics that are not comprehensible in applications. In recent years, a number of knowledge-based models have been proposed, which allow the user to input prior knowledge of the domain to produce more coherent and meaningful topics. In this paper, we go one step further to study how the prior knowledge from other domains can be exploited to help topic modeling in the new domain. This problem setting is important from both the application and the learning perspectives because knowledge is inherently accumulative. We human beings gain knowledge gradually and use the old knowledge to help solve new problems. To achieve this objective, existing models have some major difficulties. In this paper, we propose a novel knowledge-based model, called MDK-LDA, which is capable of using prior knowledge from multiple domains. Our evaluation results will demonstrate its effectiveness.",
"title": ""
}
] |
[
{
"docid": "59ce0a9af71c96d684ffb385df1f1f23",
"text": "STUDIES in animals have shown that the amygdala receives highly processed visual input1,2, contains neurons that respond selectively to faces3, and that it participates in emotion4,5 and social behaviour6. Although studies in epileptic patients support its role in emotion7, determination of the amygdala's function in humans has been hampered by the rarity of patients with selective amygdala lesions8. Here, with the help of one such rare patient, we report findings that suggest the human amygdala may be indispensable to: (1) recognize fear in facial expressions; (2) recognize multiple emotions in a single facial expression; but (3) is not required to recognize personal identity from faces. These results suggest that damage restricted to the amygdala causes very specific recognition impairments, and thus constrains the broad notion that the amygdala is involved in emotion.",
"title": ""
},
{
"docid": "ab02c4ebc5449a4371e7ebd22fd0db48",
"text": "A number of marketing phenomena are too complex for conventional analytical or empirical approaches. This makes marketing a costly process of trial and error: proposing, imagining, trying in the real world, and seeing results. Alternatively, Agent-based Social Simulation (ABSS) is becoming the most popular approach to model and study these phenomena. This research paradigm allows modeling a virtual market to: design, understand, and evaluate marketing hypotheses before taking them to the real world. However, there are shortcomings in the specialized literature such as the lack of methods, data, and implemented tools to deploy a realistic virtual market with ABSS. To advance the state of the art in this complex and interesting problem, this paper is a seven-fold contribution based on a (1) method to design and validate viral marketing strategies in Twitter by ABSS. The method is illustrated with the widely studied problem of rumor diffusion in social networks. After (2) an extensive review of the related works for this problem, (3) an innovative spread model is proposed which rests on the exploratory data analysis of two different rumor datasets in Twitter. Besides, (4) new strategies are proposed to control malicious gossips. (5) The experimental results validate the realism of this new propagation model with the datasets and (6) the strategies performance is evaluated over this model. (7) Finally, the article is complemented by a free and open-source simulator.",
"title": ""
},
{
"docid": "7766594b5302dba96c81c5314927cae5",
"text": "This paper presents a method for recognizing human-hand gestures using a model-based approach. A nite state machine is used to model four qualitatively distinct phases of a generic gesture. Fingertips are tracked in multiple frames to compute motion trajectories. The trajectories are then used for nding the start and stop position of the gesture. Gestures are represented as a list of vectors and are then matched to stored gesture vector models using table lookup based on vector displacements. Results are presented showing recognition of seven gestures using images sampled at 4Hz on a SPARC-1 without any special hardware. The seven gestures are representatives for",
"title": ""
},
{
"docid": "7f3686b783273c4df7c4fb41fe7ccefd",
"text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e032ace86d446b4ecacbda453913a373",
"text": "While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the languagemodel likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English↔French translation; especially, by learning from monolingual data (with 10% bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "1b9d74a2f720a75eec5d94736668390e",
"text": "Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing a wealth of information for sensitive and specific diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a dataset of unprecedented size, consisting of 4,875 subjects with 93,500 pixelwise annotated images, which is by far the largest annotated CMR dataset. By combining FCN with a large-scale annotated dataset, we show for the first time that an automated method achieves a performance on par with human experts in analysing CMR images and deriving clinical measures. We anticipate this to be a starting point for automated and comprehensive CMR analysis with human-level performance, facilitated by machine learning. It is an important advance on the pathway towards computer-assisted CVD assessment. An estimated 17.7 million people died from cardiovascular diseases (CVDs) in 2015, representing 31% of all global deaths [1]. More people die annually from CVDs than any other cause. Technological advances in medical imaging have led to a number of options for non-invasive investigation of CVDs, including echocardiography, computed tomography (CT), cardiovascular magnetic resonance (CMR) etc., each having its own advantages and disadvantages. Due to its good image quality, excellent soft tissue contrast and absence of ionising radiation, CMR has established itself as the gold standard for assessing cardiac chamber volume and mass for a wide range of CVDs [2–4]. To derive quantitative measures such as volume and mass, clinicians have been relying on manual approaches to trace the cardiac chamber contours. It typically takes a trained",
"title": ""
},
{
"docid": "28b15544f3e054ca483382a471c513e5",
"text": "In this work, design and control system development of a gas-electric hybrid quad tilt-rotor UAV with morphing wing are presented. The proposed aircraft has an all carbon-composite body, gas-electric hybrid electric generation system for 3 hours hovering or up to 10 hours of horizontal flight, a novel configuration for VTOL and airplane-like flights with minimized aerodynamic costs and mechanical morphing wings for both low speed and high speed horizontal flights. The mechanical design of the vehicle is performed to achieve a strong and light-weight structure, whereas the aerodynamic and propulsion system designs are aimed for accomplishing both fixed wing and rotary wing aircraft flights with maximized flight endurance. A detailed dynamic model of the aerial vehicle is developed including the effects of tilting rotors, variable fuel weight, and morphing wing lift-drag forces and pitching moments. Control system is designed for both flight regimes and flight simulations are carried out to check the performance of the proposed control system.",
"title": ""
},
{
"docid": "8d3f65dbeba6c158126ae9d82c886687",
"text": "Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity. THE RELATION BETWEEN STOCK AND BOND RETURNS has been widely studied at the aggregate level ~see, e.g., Keim and Stambaugh ~1986!, Fama and French ~1989, 1993!, Campbell and Ammer ~1993!!. Recently, a few studies have investigated that relation at both the individual firm level ~see, e.g., Kwan ~1996!! and portfolio level ~see, e.g., Blume, Keim, and Patel ~1991!, Cornell and Green ~1991!!. These studies focus on corporate bond returns, or yield changes. The main conclusions of these papers are: ~1! high-grade bonds behave like Treasury bonds, and ~2! low-grade bonds are more sensitive to stock returns. The implications of these studies may be limited in many situations of interest, however. For example, hedge funds often take highly levered positions in corporate bonds while hedging away interest rate risk by shorting treasuries. As a consequence, their portfolios become extremely sensitive to changes in credit spreads rather than changes in bond yields. The distinc* Collin-Dufresne is at Carnegie Mellon University. Goldstein is at Washington University in St. Louis. Martin is at Arizona State University. A significant portion of this paper was written while Goldstein and Martin were at The Ohio State University. We thank Rui Albuquerque, Gurdip Bakshi, Greg Bauer, Dave Brown, Francesca Carrieri, Peter Christoffersen, Susan Christoffersen, Greg Duffee, Darrell Duffie, Vihang Errunza, Gifford Fong, Mike Gallmeyer, Laurent Gauthier, Rick Green, John Griffin, Jean Helwege, Kris Jacobs, Chris Jones, Andrew Karolyi, Dilip Madan, David Mauer, Erwan Morellec, Federico Nardari, N.R. Prabhala, Tony Sanders, Sergei Sarkissian, Bill Schwert, Ken Singleton, Chester Spatt, René Stulz ~the editor!, Suresh Sundaresan, Haluk Unal, Karen Wruck, and an anonymous referee for helpful comments. We thank Ahsan Aijaz, John Puleo, and Laura Tuttle for research assistance. We are also grateful to seminar participants at Arizona State University, University of Maryland, McGill University, The Ohio State University, University of Rochester, and Southern Methodist University. THE JOURNAL OF FINANCE • VOL. LVI, NO. 6 • DEC. 2001",
"title": ""
},
{
"docid": "d896277dfe38400c9e74b7366ad93b6d",
"text": "This work is primarily focused on the design and development of an efficient and cost effective solar photovoltaic generator (PVG) based water pumping system implying a switched reluctance motor (SRM) drive. The maximum extraction of available power from PVG is attained by introducing an incremental conductance (IC) maximum power point tracking (MPPT) controller with Landsman DC-DC converter as a power conditioning stage. The CCM (continuous conduction mode) operation of Landsman DC-DC converter helps to reduce the current and voltage stress on its components and to realize the DC-DC conversion ratio independent of the load. The efficient utilization of SPV array and limiting the high initial inrush current in the motor drive is the primary concern of a Landsman converter. The inclusion of start-up control algorithm in the motor drive system facilitates the smooth self-starting of an 8/6 SRM drive. A novel approach to regulate the speed of the motor-pump system by controlling the DC link voltage of split capacitors converter helps in eliminating the voltage or current sensors required for speed control of SRM drive. The electronic commutated operation of mid-point converter considerably reduces its switching losses. This topology is designed and modeled in Matlab/Simulink platform and a laboratory prototype is developed to validate its performance under varying environmental conditions.",
"title": ""
},
{
"docid": "f6dbce178e428522c80743e735920875",
"text": "With the recent advancement in deep learning, we have witnessed a great progress in single image super-resolution. However, due to the significant information loss of the image downscaling process, it has become extremely challenging to further advance the state-of-theart, especially for large upscaling factors. This paper explores a new research direction in super resolution, called reference-conditioned superresolution, in which a reference image containing desired high-resolution texture details is provided besides the low-resolution image. We focus on transferring the high-resolution texture from reference images to the super-resolution process without the constraint of content similarity between reference and target images, which is a key difference from previous example-based methods. Inspired by recent work on image stylization, we address the problem via neural texture transfer. We design an end-to-end trainable deep model which generates detail enriched results by adaptively fusing the content from the low-resolution image with the texture patterns from the reference image. We create a benchmark dataset for the general research of reference-based super-resolution, which contains reference images paired with low-resolution inputs with varying degrees of similarity. Both objective and subjective evaluations demonstrate the great potential of using reference images as well as the superiority of our results over other state-of-the-art methods.",
"title": ""
},
{
"docid": "04ff71f5c35c8d50869ba78b95036ab9",
"text": "A numerical tool for the simulation of cavitating flows is presented in this paper. The numerical model is based on the 2D Euler/Navier-Stokes equations with a barotropic state law, which are solved in body fitted coordinates using a robust spatial and temporal differencing scheme. The model has been validated comparing numerical and experimental results on ogival and hemispherical axisymmetric head forms at different cavitation numbers. Simulations of the flow around a NACA0015 airfoil have also been successfully performed. Then, 2D simulations of the blade-to-blade flow at a fixed radial position of a 9° Helical Inducer have been carried out at various cavitation numbers. Water has been assumed as the working fluid. Cavitating regions of increasing size have been computed, until the cavitation bubble covers the entire blade profile; a good agreement is observed between the computed cavitation number and the values obtained both experimentally and through an empirical correlation for the case at which choked conditions are attained. Nomenclature a = speed of sound d = inducer axial length K = cavitation index L = generic location p = pressure p T = total pressure ∆P' max = maximum value of pressure correction R = inducer radius x,y = space coordinates u = relative x-velocity component v = relative y-velocity component V = absolute velocity vector W = relative velocity vector Greek α = incidence angle, void fraction β = flow angle β b = blade angle φ = flow coefficient Φ = general variable µ = viscosity ξ = arbitrary number >1 ρ = density σ = cavitation number ω Φ = under-relaxation factor Ω = rotational speed Subscripts l = liquid; pure liquid in the barotropic two-phase representation g = gas; pure vapor in the barotropic two-phase representation H = hub r = radial ref = reference condition s = saturation T = tip v = vapor ∞ = free stream 1 = inlet 2 = outlet",
"title": ""
},
{
"docid": "af8478b5dcb5c1d028fd7ae72989e84a",
"text": "OBJECTIVE\nThe purpose of this study was to compare the results of three types of short segment screw fixation for thoracolumbar burst fracture accompanying osteopenia.\n\n\nMETHODS\nThe records of 70 patients who underwent short segment screw fixation for a thoracolumbar burst fracture accompanying osteopenia (-2.5< mean T score by bone mineral densitometry <-1.0) from January 2005 to January 2008 were reviewed. Patients were divided into three groups based on whether or not bone fusion and bone cement augmentation procedure 1) Group I (n=26) : short segment fixation with posterolateral bone fusion; 2) Group II (n=23) : bone cement augmented short segment fixation with posterolateral bone fusion; 3) Group III (n=21) : bone cement augmented, short segment percutaneous screw fixation without bone fusion. Clinical outcomes were assessed using a visual analogue scale and modified MacNab's criteria. Radiological findings, including kyphotic angle and vertebral height, and procedure-related complications, such as screw loosening or pull-out, were analyzed.\n\n\nRESULTS\nNo significant difference in radiographic or clinical outcomes was noted between patients managed using the three different techniques at last follow up. However, Group I showed more correction loss of kyphotic deformities and vertebral height loss at final follow-up, and Group I had higher screw loosening and implant failure rates than Group II or III.\n\n\nCONCLUSION\nBone cement augmented procedure can be an efficient and safe surgical techniques in terms of achieving better outcomes with minimal complications for thoracolumbar burst fracture accompanying osteopenia.",
"title": ""
},
{
"docid": "7448b45dd5809618c3b6bb667cb1004f",
"text": "We first provide criteria for assessing informed consent online. Then we examine how cookie technology and Web browser designs have responded to concerns about informed consent. Specifically, we document relevant design changes in Netscape Navigator and Internet Explorer over a 5-year period, starting in 1995. Our retrospective analyses leads us to conclude that while cookie technology has improved over time regarding informed consent, some startling problems remain. We specify six of these problems and offer design remedies. This work fits within the emerging field of Value-Sensitive Design.",
"title": ""
},
{
"docid": "b6ef5190b0e1b2020abc4b143be5acc9",
"text": "This paper presents a circuit-compatible compact model for the intrinsic channel region of the MOSFET-like single-walled carbon-nanotube field-effect transistors (CNFETs). This model is valid for CNFET with a wide range of chiralities and diameters and for CNFET with either metallic or semiconducting carbon-nanotube (CNT) conducting channel. The modeled nonidealities include the quantum confinement effects on both circumferential and axial directions, the acoustical/optical phonon scattering in the channel region, and the screening effect by the parallel CNTs for CNFET with multiple CNTs. In order to be compatible with both large-(digital) and small-signal (analog) applications, a complete transcapacitance network is implemented to deliver the real-time dynamic response. This model is implemented with an HSPICE. Using this model, we project a 13 times CV/I improvement of the intrinsic CNFET with (19, 0) CNT over the bulk n-type MOSFET at the 32-nm node. The model described in this paper serves as a starting point toward the complete CNFET-device model incorporating the additional device/circuit-level non-idealities and multiple CNTs reported in the paper of Deng and Wong.",
"title": ""
},
{
"docid": "a608f681a3833d932bf723ca26dfe511",
"text": "The purpose of the study was to explore whether personality traits moderate the association between social comparison on Facebook and subjective well-being, measured as both life satisfaction and eudaimonic well-being. Data were collected via an online questionnaire which measured Facebook use, social comparison behavior and personality traits for 337 respondents. The results showed positive associations between Facebook intensity and both measures of subjective well-being, and negative associations between Facebook social comparison and both measures of subjective well-being. Personality traits were assessed by the Reinforcement Sensitivity Theory personality questionnaire, which revealed that Reward Interest was positively associated with eudaimonic well-being, and Goal-Drive Persistence was positively associated with both measures of subjective well-being. Impulsivity was negatively associated with eudaimonic well-being and the Behavioral Inhibition System was negatively associated with both measures of subjective well-being. Interactions between personality traits and social comparison on Facebook indicated that for respondents with high Goal-Drive Persistence, Facebook social comparison had a positive association with eudaimonic well-being, thus confirming that some personality traits moderate the association between Facebook social comparison and subjective well-being. The results of this study highlight how individual differences in personality may impact how social comparison on Facebook affects individuals’ subjective well-being.",
"title": ""
},
{
"docid": "569700bd1114b1b93a13af25b2051631",
"text": "Empathy and sympathy play crucial roles in much of human social interaction and are necessary components for healthy coexistence. Sympathy is thought to be a proxy for motivating prosocial behavior and providing the affective and motivational base for moral development. The purpose of the present study was to use functional MRI to characterize developmental changes in brain activation in the neural circuits underpinning empathy and sympathy. Fifty-seven individuals, whose age ranged from 7 to 40 years old, were presented with short animated visual stimuli depicting painful and non-painful situations. These situations involved either a person whose pain was accidentally caused or a person whose pain was intentionally inflicted by another individual to elicit empathic (feeling as the other) or sympathetic (feeling concern for the other) emotions, respectively. Results demonstrate monotonic age-related changes in the amygdala, supplementary motor area, and posterior insula when participants were exposed to painful situations that were accidentally caused. When participants observed painful situations intentionally inflicted by another individual, age-related changes were detected in the dorsolateral prefrontal and ventromedial prefrontal cortex, with a gradual shift in that latter region from its medial to its lateral portion. This pattern of activation reflects a change from a visceral emotional response critical for the analysis of the affective significance of stimuli to a more evaluative function. Further, these data provide evidence for partially distinct neural mechanisms subserving empathy and sympathy, and demonstrate the usefulness of a developmental neurobiological approach to the new emerging area of moral neuroscience.",
"title": ""
},
{
"docid": "b05f2cc1590857e7a50d54f6201c8f82",
"text": "Holograms display a 3D image in high resolution and allow viewers to focus freely as if looking through a virtual window, yet computer generated holography (CGH) hasn't delivered the same visual quality under plane wave illumination and due to heavy computational cost. Light field displays have been popular due to their capability to provide continuous focus cues. However, light field displays must trade off between spatial and angular resolution, and do not model diffraction.\n We present a light field-based CGH rendering pipeline allowing for reproduction of high-definition 3D scenes with continuous depth and support of intra-pupil view-dependent occlusion. Our rendering accurately accounts for diffraction and supports various types of reference illuminations for hologram. We avoid under- and over-sampling and geometric clipping effects seen in previous work. We also demonstrate an implementation of light field rendering plus the Fresnel diffraction integral based CGH calculation which is orders of magnitude faster than the state of the art [Zhang et al. 2015], achieving interactive volumetric 3D graphics.\n To verify our computational results, we build a see-through, near-eye, color CGH display prototype which enables co-modulation of both amplitude and phase. We show that our rendering accurately models the spherical illumination introduced by the eye piece and produces the desired 3D imagery at the designated depth. We also analyze aliasing, theoretical resolution limits, depth of field, and other design trade-offs for near-eye CGH.",
"title": ""
},
{
"docid": "0a26e03606cdf93de0958c01ca4c693a",
"text": "A bidirectional full-bridge LLC resonant converter with a new symmetric LLC-type resonant network using a digital control scheme is proposed for a 380V dc power distribution system. This converter can operate under high power conversion efficiency since the symmetric LLC resonant network has zero voltage switching capability for primary power switches and soft commutation capability for output rectifiers. In addition, the proposed topology does not require any clamp circuits to reduce the voltage stress of the switches because the switch voltage of the primary inverting stage is confined by the input voltage, and that of the secondary rectifying stage is limited by the output voltage. Therefore, the power conversion efficiency of any directions is exactly the same as each other. In addition, intelligent digital control schemes such as dead-band control and switch transition control are proposed to regulate output voltage for any power flow directions. A prototype converter designed for a high-frequency galvanic isolation of 380V dc buses was developed with a rated power rating of 5kW using a digital signal processor to verify the performance of the proposed topology and algorithms. The maximum power conversion efficiency was 97.8% during bidirectional operations.",
"title": ""
},
{
"docid": "6a2b9761b745f4ece1bba3fab9f5d8b1",
"text": "Driven by the evolution of consumer-to-consumer (C2C) online marketplaces, we examine the role of communication tools (i.e., an instant messenger, internal message box and a feedback system), in facilitating dyadic online transactions in the Chinese C2C marketplace. Integrating the Chinese concept of guanxi with theories of social translucence and social presence, we introduce a structural model that explains how rich communication tools influence a website’s interactivity and presence, subsequently building trust and guanxi among buyers and sellers, and ultimately predicting buyers’ repurchase intentions. The data collected from 185 buyers in TaoBao, China’s leading C2C online marketplace, strongly support the proposed model. We believe that this research is the first formal study to show evidence of guanxi in online C2C marketplaces, and it is attributed to the role of communication tools to enhance a website’s interactivity and presence.",
"title": ""
}
] |
scidocsrr
|
9b54291018f5551cb85ec920b9361b75
|
Differentiating malware from cleanware using behavioural analysis
|
[
{
"docid": "aee115084c027ff5c69198ae481a860d",
"text": "Malware is software designed to infiltrate or damage a computer system without the owner's informed consent (e.g., viruses, backdoors, spyware, trojans, and worms). Nowadays, numerous attacks made by the malware pose a major security threat to computer users. Unfortunately, along with the development of the malware writing techniques, the number of file samples that need to be analyzed, named \"gray list,\" on a daily basis is constantly increasing. In order to help our virus analysts, quickly and efficiently pick out the malicious executables from the \"gray list,\" an automatic and robust tool to analyze and classify the file samples is needed. In our previous work, we have developed an intelligent malware detection system (IMDS) by adopting associative classification method based on the analysis of application programming interface (API) execution calls. Despite its good performance in malware detection, IMDS still faces the following two challenges: (1) handling the large set of the generated rules to build the classifier; and (2) finding effective rules for classifying new file samples. In this paper, we first systematically evaluate the effects of the postprocessing techniques (e.g., rule pruning, rule ranking, and rule selection) of associative classification in malware detection, and then, propose an effective way, i.e., CIDCPF, to detect the malware from the \"gray list.\" To the best of our knowledge, this is the first effort on using postprocessing techniques of associative classification in malware detection. CIDCPF adapts the postprocessing techniques as follows: first applying Chi-square testing and Insignificant rule pruning followed by using Database coverage based on the Chi-square measure rule ranking mechanism and Pessimistic error estimation, and finally performing prediction by selecting the best First rule. We have incorporated the CIDCPF method into our existing IMDS system, and we call the new system as CIMDS system. Case studies are performed on the large collection of file samples obtained from the Antivirus Laboratory at Kingsoft Corporation and promising experimental results demonstrate that the efficiency and ability of detecting malware from the \"gray list\" of our CIMDS system outperform popular antivirus software tools, such as McAfee VirusScan and Norton Antivirus, as well as previous data-mining-based detection systems, which employed Naive Bayes, support vector machine, and decision tree techniques. In particular, our CIMDS system can greatly reduce the number of generated rules, which makes it easy for our virus analysts to identify the useful ones.",
"title": ""
}
] |
[
{
"docid": "60f31d60213abe65faec3eb69edb1eea",
"text": "In this paper, a novel multi-layer four-way out-of-phase power divider based on substrate integrated waveguide (SIW) is proposed. The four-way power division is realized by 3-D mode coupling; vertical partitioning of a SIW followed by lateral coupling to two half-mode SIW. The measurement results show the excellent insertion loss (S<inf>21</inf>, S<inf>31</inf>, S<inf>41</inf>, S<inf>51</inf>: −7.0 ± 0.5 dB) and input return loss (S<inf>11</inf>: −10 dB) in X-band (7.63 GHz ∼ 11.12 GHz). We expect that the proposed power divider play an important role for the integration of compact multi-way SIW circuits.",
"title": ""
},
{
"docid": "d8536cd772437753b3b9e972ae5653f3",
"text": "Modeling students’ knowledge is a fundamental part of intelligent tutoring systems. One of the most popular methods for estimating students’ knowledge is Corbett and Anderson’s [6] Bayesian Knowledge Tracing model. The model uses four parameters per skill, fit using student performance data, to relate performance to learning. Beck [1] showed that existing methods for determining these parameters are prone to the Identifiability Problem: the same performance data can be fit equally well by different parameters, with different implications on system behavior. Beck offered a solution based on Dirichlet Priors [1], but, we show this solution is vulnerable to a different problem, Model Degeneracy, where parameter values violate the model’s conceptual meaning (such as a student being more likely to get a correct answer if he/she does not know a skill than if he/she does). We offer a new method for instantiating Bayesian Knowledge Tracing, using machine learning to make contextual estimations of the probability that a student has guessed or slipped. This method is no more prone to problems with Identifiability than Beck’s solution, has less Model Degeneracy than competing approaches, and fits student performance data better than prior methods. Thus, it allows for more accurate and reliable student modeling in ITSs that use knowledge tracing.",
"title": ""
},
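For reference, the standard four-parameter Bayesian Knowledge Tracing update that the passage above builds on can be written compactly. This is a minimal sketch only: the parameter values are made-up placeholders, and the contextual (per-observation) guess/slip estimation proposed in the passage is not shown.

```python
def bkt_update(p_know, correct, p_guess, p_slip, p_learn):
    """One Bayesian Knowledge Tracing step.

    p_know: current P(student knows the skill).
    correct: True if the observed response was correct.
    p_guess, p_slip, p_learn: the classic BKT guess, slip and learning rates.
    Returns the updated P(know) after observing the response and allowing learning.
    """
    if correct:
        evidence = p_know * (1.0 - p_slip)
        posterior = evidence / (evidence + (1.0 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1.0 - p_know) * (1.0 - p_guess))
    # Learning transition: the student may acquire the skill after this opportunity.
    return posterior + (1.0 - posterior) * p_learn

# Placeholder parameters and a short observation sequence (1 = correct, 0 = incorrect).
p_know, p_guess, p_slip, p_learn = 0.3, 0.2, 0.1, 0.15
for obs in [1, 0, 1, 1]:
    p_know = bkt_update(p_know, bool(obs), p_guess, p_slip, p_learn)
    print(round(p_know, 3))
```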
{
"docid": "d394fd4ec952c1413e8d8c9b3f0acc82",
"text": "A parallel algorithm, called adaptive bitonic sorting, that runs on a PRAC (parallel random access computer), a shared-memory multiprocessor where fetch and store conflicts are disallowed, is proposed. On a P processors PRAC, the algorithm presented here achieves optimal performance TP O(N log N), for any computation time T in the range (log N)<-_ T<= O(N log N). Adaptive bitonic sorting also has a small constant factor, since it performs less than 2N log N comparisons, and only a handful of operations per comparison. Key words, sorting, parallel computation, shared-memory machines, bitonic sequence, time processors optimality AMS(MOS) subject classifications. 68Q20, 68Q25, 68Q10",
"title": ""
},
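The classic (non-adaptive) bitonic sorting network underlying the passage above can be sketched as follows. This is the standard sequential formulation for power-of-two input sizes, not the adaptive, pointer-based variant the paper proposes, and it is meant only as a worked illustration of the bitonic merge structure.

```python
def bitonic_sort(values, ascending=True):
    """Classic bitonic sort; the input length must be a power of two."""
    def compare_swap(a, i, j, up):
        # Order the pair (a[i], a[j]) according to the direction `up`.
        if (a[i] > a[j]) == up:
            a[i], a[j] = a[j], a[i]

    def merge(a, lo, n, up):
        # Merge a bitonic sequence of length n starting at lo into sorted order.
        if n > 1:
            half = n // 2
            for i in range(lo, lo + half):
                compare_swap(a, i, i + half, up)
            merge(a, lo, half, up)
            merge(a, lo + half, half, up)

    def sort(a, lo, n, up):
        if n > 1:
            half = n // 2
            sort(a, lo, half, True)            # ascending half
            sort(a, lo + half, half, False)    # descending half -> bitonic sequence
            merge(a, lo, n, up)

    a = list(values)
    sort(a, 0, len(a), ascending)
    return a

print(bitonic_sort([7, 3, 6, 1, 8, 2, 5, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```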
{
"docid": "8f799fc7625b593694c8b3d85216d27b",
"text": "With the integration of deep learning into the traditional field of reinforcement learning in the recent decades, the spectrum of applications that artificial intelligence caters is currently very broad. As using AI to play games is a traditional application of reinforcement learning, the project’s objective is to implement a deep reinforcement learning agent that can defeat a video game. Since it is often difficult to determine which algorithms are appropriate given the wide selection of state-of-the-art techniques in the discipline, proper comparisons and investigations of the algorithms are a prerequisite to implementing such an agent. As a result, this paper serves as a platform for exploring the possibility and effectiveness of using conventional state-of-the-art methods, such as Deep Q Networks and its variants, such as Double Deep Q Networks, are appropriate for game playing, with Deep Q Networks successful in playing a randomized map, further work in this project is needed in order for a comprehensive view of the discipline. Such work in the near future includes the investigation of the use of deep reinforcement learning on games unreported in the literature, or potential improvement to existing deep reinforcement learning techniques. In spite of the technical difficulties encountered and minor amendments to the project schedule, the project is still currently on schedule, ie. approximately 50% complete.",
"title": ""
},
{
"docid": "e3d9d30900b899bcbf54cbd1b5479713",
"text": "A new test method has been implemented for testing the EMC performance of small components like small connectors and IC's, mainly used in mobile applications. The test method is based on the EMC-stripline method. Both emission and immunity can be tested up to 6GHz, based on good RF matching conditions and with high field strengths.",
"title": ""
},
{
"docid": "0506949c45febe7ce99e3f37cd7edcf2",
"text": "Present study demonstrated that fibrillar β-amyloid peptide (fAβ1-42) induced ATP release, which in turn activated NADPH oxidase via the P2X7 receptor (P2X7R). Reactive oxygen species (ROS) production in fAβ1-42-treated microglia appeared to require Ca2+ influx from extracellular sources, because ROS generation was abolished to control levels in the absence of extracellular Ca2+. Considering previous observation of superoxide generation by Ca2+ influx through P2X7R in microglia, we hypothesized that ROS production in fAβ-stimulated microglia might be mediated by ATP released from the microglia. We therefore examined whether fAβ1-42-induced Ca2+ influx was mediated through P2X7R activation. In serial experiments, we found that microglial pretreatment with the P2X7R antagonists Pyridoxal-phosphate-6-azophenyl-2',4'- disulfonate (100 µM) or oxidized ATP (100 µM) inhibited fAβ-induced Ca2+ influx and reduced ROS generation to basal levels. Furthermore, ATP efflux from fAβ1-42-stimulated microglia was observed, and apyrase treatment decreased the generation of ROS. These findings provide conclusive evidence that fAβ-stimulated ROS generation in microglial cells is regulated by ATP released from the microglia in an autocrine manner.",
"title": ""
},
{
"docid": "0072941488ef0e22b06d402d14cbe1be",
"text": "This chapter is about computational modelling of the process of musical composition, based on a cognitive model of human behaviour. The idea is to try to study not only the requirements for a computer system which is capable of musical composition, but also to relate it to human behaviour during the same process, so that it may, perhaps, work in the same way as a human composer, but also so that it may, more likely, help us understand how human composers work. Pearce et al. (2002) give a fuller discussion of the motivations behind this endeavour.",
"title": ""
},
{
"docid": "79f7f7294f23ab3aace0c4d5d589b4a8",
"text": "Along with the expansion of globalization, multilingualism has become a popular social phenomenon. More than one language may occur in the context of a single conversation. This phenomenon is also prevalent in China. A huge variety of informal Chinese texts contain English words, especially in emails, social media, and other user generated informal contents. Since most of the existing natural language processing algorithms were designed for processing monolingual information, mixed multilingual texts cannot be well analyzed by them. Hence, it is of critical importance to preprocess the mixed texts before applying other tasks. In this paper, we firstly analyze the phenomena of mixed usage of Chinese and English in Chinese microblogs. Then, we detail the proposed two-stage method for normalizing mixed texts. We propose to use a noisy channel approach to translate in-vocabulary words into Chinese. For better incorporating the historical information of users, we introduce a novel user aware neural network language model. For the out-of-vocabulary words (such as pronunciations, informal expressions and et al.), we propose to use a graph-based unsupervised method to categorize them. Experimental results on a manually annotated microblog dataset demonstrate the effectiveness of the proposed method. We also evaluate three natural language parsers with and without using the proposed method as the preprocessing step. From the results, we can see that the proposed method can significantly benefit other NLP tasks in processing mixed text.",
"title": ""
},
{
"docid": "1492c9c12d2ae969e1b45831f642943f",
"text": "In this paper, a novel polarization-reconfigurable converter (PRC) is proposed based on a multilayer frequency-selective surface (MFSS). First, the MFSS is designed using the square patches and the grid lines array to determine the operational frequency and bandwidth, and then the corners of the square patches are truncated to produce the phase difference of 90° between the two orthogonal linear components for circular polarization performance. To analyze and synthesize the PRC array, the operational mechanism is described in detail. The relation of the polarization states as a function of the rotating angle of the PRC array is summarized from the principle of operation. Therefore, the results show that the linear polarization (LP) from an incident wave can be reconfigured to LP, right- and left-hand circular polarizations by rotating the free-standing converter screen. The cell periods along x- and y-directions are the same, and their total height is 6 mm. The fractional bandwidth of axial ratio (AR) less than 3 dB is more than 15% with respect to the center operating frequency of 10 GHz at normal incidence. Simultaneously, the AR characteristics of different incidence angles for oblique incidence with TE and TM polarizations show that the proposed PRC has good polarization and angle stabilities. Moreover, the general design procedure and method is presented. Finally, a circularly shaped PRC array using the proposed PRC element based on the MFSS design is fabricated and measured. The agreement between the simulated and measured results is excellent.",
"title": ""
},
{
"docid": "1d5aee3a22f540f6bb8ae619cdc9935d",
"text": "In emergency situations, actions that save lives and limit the impact of hazards are crucial. In order to act, situational awareness is needed to decide what to do. Geolocalized photos and video of the situations as they evolve can be crucial in better understanding them and making decisions faster. Cameras are almost everywhere these days, either in terms of smartphones, installed CCTV cameras, UAVs or others. However, this poses challenges in big data and information overflow. Moreover, most of the time there are no disasters at any given location, so humans aiming to detect sudden situations may not be as alert as needed at any point in time. Consequently, computer vision tools can be an excellent decision support. The number of emergencies where computer vision tools has been considered or used is very wide, and there is a great overlap across related emergency research. Researchers tend to focus on state-of-the-art systems that cover the same emergency as they are studying, obviating important research in other fields. In order to unveil this overlap, the survey is divided along four main axes: the types of emergencies that have been studied in computer vision, the objective that the algorithms can address, the type of hardware needed and the algorithms used. Therefore, this review provides a broad overview of the progress of computer vision covering all sorts of emergencies.",
"title": ""
},
{
"docid": "372fa95863cf20fdcb632d033cb4d944",
"text": "Traditional approaches for color propagation in videos rely on some form of matching between consecutive video frames. Using appearance descriptors, colors are then propagated both spatially and temporally. These methods, however, are computationally expensive and do not take advantage of semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, to propagate colors frame-by-frame ensuring temporal stability, and a global strategy, using semantics for color propagation within a longer range. Our evaluation shows the superiority of our strategy over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches.",
"title": ""
},
{
"docid": "068be5b13515937ed76592bf8a9782ce",
"text": "We outline the core components of a modulation recognition system that uses hierarchical deep neural networks to identify data type, modulation class and modulation order. Our system utilizes a flexible front-end detector that performs energy detection, channelization and multi-band reconstruction on wideband data to provide raw narrowband signal snapshots. We automatically extract features from these snapshots using convolutional neural network layers, which produce decision class estimates. Initial experimentation on a small synthetic radio frequency dataset indicates the viability of deep neural networks applied to the communications domain. We plan to demonstrate this system at the Battle of the Mod Recs Workshop at IEEE DySpan 2017.",
"title": ""
},
{
"docid": "0a4749ecc23cb04f494a987268704f0f",
"text": "With the growing demand for digital information in health care, the electronic medical record (EMR) represents the foundation of health information technology. It is essential, however, in an industry still largely dominated by paper-based records, that such systems be accepted and used. This research evaluates registered nurses’, certified nurse practitioners and physician assistants’ acceptance of EMR’s as a means to predict, define and enhance use. The research utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) as the theoretical model, along with the Partial Least Square (PLS) analysis to estimate the variance. Overall, the findings indicate that UTAUT is able to provide a reasonable assessment of health care professionals’ acceptance of EMR’s with social influence a significant determinant of intention and use.",
"title": ""
},
{
"docid": "4362bc019deebc239ba4b6bc2fee446e",
"text": "observed. It was mainly due to the developments in biological studies, the change of a population lifestyle and the increase in the consumer awareness concerning food products. The health quality of food depends mainly on nutrients, but also on foreign substances such as food additives. The presence of foreign substances in the food can be justified, allowed or tolerated only when they are harmless to our health. Epidemic obesity and diabetes encouraged the growth of the artificial sweetener industry. There are more and more people who are trying to lose weight or keeping the weight off; therefore, sweeteners can be now found in almost all food products. There are two main types of sweeteners, i.e., nutritive and artificial ones. The latter does not provide calories and will not influence blood glucose; however, some of nutritive sweeteners such as sugar alcohols also characterize with lower blood glucose response and can be metabolized without insulin, being at the same time natural compounds. Sugar alcohols (polyols or polyhydric alcohols) are low digestible carbohydrates, which are obtained by substituting and aldehyde group with a hydroxyl one [1, 2]. As most of sugar alcohols are produced from their corresponding aldose sugars, they are also called alditols [3]. Among sugar alcohols can be listed hydrogenated monosaccharides (sorbitol, mannitol), hydrogenated disaccharides (isomalt, maltitol, lactitol) and mixtures of hydrogenated mono-diand/or oligosaccharides (hydrogenated starch hydrolysates) [1, 2, 4]. Polyols are naturally present in smaller quantities in fruits as well as in certain kinds of vegetables or mushrooms, and they are also regulated as either generally recognized as safe or food additives [5–7]. Food additives are substances that are added intentionally to foodstuffs in order to perform certain technological functions such as to give color, to sweeten or to help in food preservation. Abstract Epidemic obesity and diabetes encouraged the changes in population lifestyle and consumers’ food products awareness. Food industry has responded people’s demand by producing a number of energy-reduced products with sugar alcohols as sweeteners. These compounds are usually produced by a catalytic hydrogenation of carbohydrates, but they can be also found in nature in fruits, vegetables or mushrooms as well as in human organism. Due to their properties, sugar alcohols are widely used in food, beverage, confectionery and pharmaceutical industries throughout the world. They have found use as bulk sweeteners that promote dental health and exert prebiotic effect. They are added to foods as alternative sweeteners what might be helpful in the control of calories intake. Consumption of low-calorie foods by the worldwide population has dramatically increased, as well as health concerns associated with the consequent high intake of sweeteners. This review deals with the role of commonly used sugar alcohols such as erythritol, isomalt, lactitol, maltitol, mannitol, sorbitol and xylitol as sugar substitutes in food industry.",
"title": ""
},
{
"docid": "5cbd0a5d458ac5cb596cd7f0f627d79a",
"text": "End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This vision may fail to correctly render the planning problem inherent to dialogue as well as its contextual and grounded nature. In this paper, we introduce a Deep Reinforcement Learning method to optimize visually grounded task-oriented dialogues, based on the policy gradient algorithm. This approach is tested on the question generation task from the dataset GuessWhat?! containing 120k dialogues and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex image.",
"title": ""
},
{
"docid": "4248ea350416596301e551dd48334770",
"text": "The era of big data has led to the emergence of new systems for real-time distributed stream processing, e.g., Apache Storm is one of the most popular stream processing systems in industry today. However, Storm, like many other stream processing systems lacks an intelligent scheduling mechanism. The default round-robin scheduling currently deployed in Storm disregards resource demands and availability, and can therefore be inefficient at times. We present R-Storm (Resource-Aware Storm), a system that implements resource-aware scheduling within Storm. R-Storm is designed to increase overall throughput by maximizing resource utilization while minimizing network latency. When scheduling tasks, R-Storm can satisfy both soft and hard resource constraints as well as minimizing network distance between components that communicate with each other. We evaluate R-Storm on set of micro-benchmark Storm applications as well as Storm applications used in production at Yahoo! Inc. From our experimental results we conclude that R-Storm achieves 30-47% higher throughput and 69-350% better CPU utilization than default Storm for the micro-benchmarks. For the Yahoo! Storm applications, R-Storm outperforms default Storm by around 50% based on overall throughput. We also demonstrate that R-Storm performs much better when scheduling multiple Storm applications than default Storm.",
"title": ""
},
{
"docid": "ec49f419b86fc4276ceba06fd0208749",
"text": "In order to organize the large number of products listed in e-commerce sites, each product is usually assigned to one of the multi-level categories in the taxonomy tree. It is a time-consuming and difficult task for merchants to select proper categories within thousan ds of options for the products they sell. In this work, we propose an automatic classification tool to predict the matching category for a given product title and description. We used a combinatio n of two different neural models, i.e., deep belief nets and deep autoencoders, for both titles and descriptions. We implemented a selective reconstruction approach for the input layer during the training of the deep neural networks, in order to scale-out for large-sized sparse feature vectors. GPUs are utilized in order to train neural networks in a reasonable time. We have trained o ur m dels for around 150 million products with a taxonomy tree with at most 5 levels that contains 28,338 leaf categories. Tests with millions of products show that our first prediction s matches 81% of merchants’ assignments, when “others” categories are excluded.",
"title": ""
},
{
"docid": "3363ae88df77cfe0cf54d071b9b6774b",
"text": "This chapter aims to illustrate the relationship between sociocultural globalisation and body image – globalising mechanisms appear to disseminate the Western standard of female and male beauty. Thus, the combination of ubiquitous messages for eating behaviours and beauty in both advertising and mass media programming may lead to confusion and body image dissatisfaction amongst many young people. The aim is to examine the framework of causality of culturally-induced manifestations of eating and body image disorders by gender and, in particular, to examine the role that the mass media plays in the development of male and female body attitudes regarding ideal body images and how this may in turn have an impact on their mental health.",
"title": ""
},
{
"docid": "d6052f08d99f40476fad967d9df34706",
"text": "Recommending news articles has become a promising research direction as the Internet provides fast access to real-time information from multiple sources around the world. Traditional news recommendation systems strive to adapt their services to individual users by virtue of both user and news content information. However, the latent relationships among different news items, and the special properties of new articles, such as short shelf lives and value of immediacy, render the previous approaches inefficient.\n In this paper, we propose a scalable two-stage personalized news recommendation approach with a two-level representation, which considers the exclusive characteristics (e.g., news content, access patterns, named entities, popularity and recency) of news items when performing recommendation. Also, a principled framework for news selection based on the intrinsic property of user interest is presented, with a good balance between the novelty and diversity of the recommended result. Extensive empirical experiments on a collection of news articles obtained from various news websites demonstrate the efficacy and efficiency of our approach.",
"title": ""
},
{
"docid": "23e18cb7783764b1c0bb71285bb20778",
"text": "This paper deals with the phenomenon of online self-disclosure. Two qualitative data analyses of YouTube videos were conducted. The studies revealed emerging forms of self-disclosure online, which are not necessarily bound to conditions of visual anonymity. This finding puts previous research results into question, which stress the strong correlation between self-disclosure and visual anonymity. The results of both qualitative studies showed that people also tend to disclose information in (visually) non-anonymous settings. The paper concludes by presenting a revised model of online self-disclosure and describing enhancing factors for self-disclosing behaviour on the internet based on the latest research results. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
80f528e55cad731dd272cf339e616b4a
|
Evolving multimodal controllers with HyperNEAT
|
[
{
"docid": "f76eae1326c6767c520bc4d318b239fd",
"text": "A challenging goal of generative and developmental systems (GDS) is to effectively evolve neural networks as complex and capable as those found in nature. Two key properties of neural structures in nature are regularity and modularity. While HyperNEAT has proven capable of generating neural network connectivity patterns with regularities, its ability to evolve modularity remains in question. This paper investigates how altering the traditional approach to determining whether connections are expressed in HyperNEAT influences modularity. In particular, an extension is introduced called a Link Expression Output (HyperNEAT-LEO) that allows HyperNEAT to evolve the pattern of weights independently from the pattern of connection expression. Because HyperNEAT evolves such patterns as functions of geometry, important general topographic principles for organizing connectivity can be seeded into the initial population. For example, a key topographic concept in nature that encourages modularity is locality, that is, components of a module are located near each other. As experiments in this paper show, by seeding HyperNEAT with a bias towards local connectivity implemented through the LEO, modular structures arise naturally. Thus this paper provides an important clue to how an indirect encoding of network structure can be encouraged to evolve modularity.",
"title": ""
}
] |
[
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "bd64a38a507001f0b17098138f297cc7",
"text": "Affect sensitivity is of the utmost importance for a robot companion to be able to display socially intelligent behaviour, a key requirement for sustaining long-term interactions with humans. This paper explores a naturalistic scenario in which children play chess with the iCat, a robot companion. A person-independent, Bayesian approach to detect the user's engagement with the iCat robot is presented. Our framework models both causes and effects of engagement: features related to the user's non-verbal behaviour, the task and the companion's affective reactions are identified to predict the children's level of engagement. An experiment was carried out to train and validate our model. Results show that our approach based on multimodal integration of task and social interaction-based features outperforms those based solely on non-verbal behaviour or contextual information (94.79 % vs. 93.75 % and 78.13 %).",
"title": ""
},
{
"docid": "5b617701a4f2fa324ca7e3e7922ce1c4",
"text": "Open circuit voltage of a silicon solar cell is around 0.6V. A solar module is constructed by connecting a number of cells in series to get a practically usable voltage. Partial shading of a Solar Photovoltaic Module (SPM) is one of the main causes of overheating of shaded cells and reduced energy yield of the module. The present work is a study of harmful effects of partial shading on the performance of a PV module. A PSPICE simulation model that represents 36 cells PV module under partial shaded conditions has been used to test several shading profiles and results are presented.",
"title": ""
},
{
"docid": "570b751e1550e25f77b97916a6c8ec1d",
"text": "Im Beitrag werden die Auswirkungen der Digitalisierung auf die Geschäftsmodelle von Industrieunternehmen im Zusammenhang mit der Entwicklung integrierter, datenbasierter Produkt-Dienstleistungsbündel untersucht. Beispiele hierfür sind Produzenten von Automatisierungsrobotern, die zusätzlich zum physischen Kernprodukt ihren Kunden verknüpfte, digitale Services zur intelligenten Steuerung, Optimierung oder Wartung anbieten. Dabei können durch die Analyse der im laufenden Betrieb erzeugten Daten neue kundenindividuelle Lösungsangebote geschaffen und damit zusätzlicher Kundennutzen generiert werden. Industrieunternehmen können somit durch die Entwicklung digitaler Geschäftsmodelle entscheidende Wettbewerbsvorteile generieren und neue Märkte erschließen. Im Beitrag werden zunächst die Auswirkungen der Digitalisierung auf die Geschäftsmodelle von Industrieunternehmen untersucht und anhand des Business Model Canvas als etablierte Methode zur Geschäftsmodellentwicklung strukturiert dargestellt und beurteilt. Auf Basis von fünf Interviews mit Experten von führenden Unternehmen in verschiedenen Schlüsselbranchen werden zentrale Auswirkungen, die daraus resultierenden Herausforderungen sowie praxisrelevante Handlungsempfehlungen diskutiert und abgeleitet. Veranschaulicht werden die mit der Digitalisierung einhergehenden Entwicklungen anhand einer Fallstudie am Beispiel von Mitsubishi Electric. Durch den Beitrag werden Praktikern die mit der digitalen Transformation von Geschäftsmodellen einhergehenden Auswirkungen erläutert sowie Ansatzpunkte für den Wandel hin zur digitalen, hybriden Wertschöpfung aufgezeigt. The paper examines the impact of digitization on industrial companies’ business models in the context of the development of integrated, data-based product service packages. Examples of this are manufacturers of automation robots that offer their customers digital services for intelligent control, optimization or maintenance in addition to their core physical product. By analyzing the data generated during ongoing operations, it is possible to create new customized solutions and thus generate additional customer benefits. Industrial companies can thus generate decisive competitive advantages and open up new markets by developing digital business models. The paper first examines the effects of digitization on industrial companies’ business models and uses the Business Model Canvas as an established method for business model development to present and evaluate them in a structured way. On the basis of five interviews with experts from leading companies in various key industries, key impacts, the resulting challenges, and practical recommendations for action are discussed and derived. The developments associated with digitization are illustrated by a case study based on the example of Mitsubishi Electric. The paper introduces practitioners to the effects of the digital transformation of business and provides starting points for the transition to digital, hybrid value creation.",
"title": ""
},
{
"docid": "eeda5d5d38c5876231bda36d175a476c",
"text": "A key issue for marketers resulting from the dramatic rise of social media is how it can be leveraged to generate value for firms. Whereas the importance of social media for brand management and customer relationship management is widely recognized, it is unclear whether social media can also help companies market and sell products. Extant discussions of social commerce present a variety of perspectives, but the core issue remains unresolved. This paper aims to make two contributions. First, to address the lack of clarity in the literature regarding the meaning and domain of social commerce, the paper offers a definition stemming from important research streams in marketing. This definition allows for both a broad (covering all steps of the consumer decision process) and a narrow (focusing on the purchase act itself) construal of social commerce. Second, we build on this definition and develop a contingency framework for assessing the marketing potential that social commerce has to offer to firms. Implications for researchers and managers, based on the proposed definition and framework, are also discussed. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5464889be41072ecff03355bf45c289f",
"text": "Grid map registration is an important field in mobile robotics. Applications in which multiple robots are involved benefit from multiple aligned grid maps as they provide an efficient exploration of the environment in parallel. In this paper, a normal distribution transform (NDT)-based approach for grid map registration is presented. For simultaneous mapping and localization approaches on laser data, the NDT is widely used to align new laser scans to reference scans. The original grid quantization-based NDT results in good registration performances but has poor convergence properties due to discontinuities of the optimization function and absolute grid resolution. This paper shows that clustering techniques overcome disadvantages of the original NDT by significantly improving the convergence basin for aligning grid maps. A multi-scale clustering method results in an improved registration performance which is shown on real world experiments on radar data.",
"title": ""
},
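The record above relies on the normal distributions transform (NDT), in which the reference map is summarized by one Gaussian per grid cell and a candidate rigid transform is scored by how well the transformed source points fit those Gaussians. Below is a minimal 2-D sketch of that scoring idea; the cell size, covariance regularization, and the brute-force search over candidate transforms are assumptions for illustration and do not reproduce the clustered, multi-scale registration of the paper.

```python
import numpy as np

def fit_cells(points, cell=1.0):
    # summarize the reference map: one (mean, inverse covariance) pair per occupied cell
    buckets = {}
    for p in points:
        buckets.setdefault(tuple(np.floor(p / cell).astype(int)), []).append(p)
    model = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)   # regularize nearly singular cells
            model[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return model

def ndt_score(model, points, theta, t, cell=1.0):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    total = 0.0
    for p in points @ R.T + t:
        key = tuple(np.floor(p / cell).astype(int))
        if key in model:
            mu, icov = model[key]
            d = p - mu
            total += np.exp(-0.5 * d @ icov @ d)      # point score under the cell Gaussian
    return total

rng = np.random.default_rng(0)
ref = rng.normal(size=(400, 2)) * np.array([4.0, 1.5])  # synthetic reference "map"
src = ref - np.array([0.5, -0.5])                       # source map, offset from the reference

model = fit_cells(ref)
candidates = [(th, np.array([tx, ty]))
              for th in np.linspace(-0.2, 0.2, 9)
              for tx in (-0.5, 0.0, 0.5) for ty in (-0.5, 0.0, 0.5)]
theta, t = max(candidates, key=lambda c: ndt_score(model, src, c[0], c[1]))
print("best candidate: theta=%.2f, t=%s" % (theta, t))
```

A gradient-based optimizer would normally replace the grid search, but the scoring function above is the part the abstract describes.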
{
"docid": "2b09ae15fe7756df3da71cfc948e9506",
"text": "Repair of the injured spinal cord by regeneration therapy remains an elusive goal. In contrast, progress in medical care and rehabilitation has resulted in improved health and function of persons with spinal cord injury (SCI). In the absence of a cure, raising the level of achievable function in mobility and self-care will first and foremost depend on creative use of the rapidly advancing technology that has been so widely applied in our society. Building on achievements in microelectronics, microprocessing and neuroscience, rehabilitation medicine scientists have succeeded in developing functional electrical stimulation (FES) systems that enable certain individuals with SCI to use their paralyzed hands, arms, trunk, legs and diaphragm for functional purposes and gain a degree of control over bladder and bowel evacuation. This review presents an overview of the progress made, describes the current challenges and suggests ways to improve further FES systems and make these more widely available.",
"title": ""
},
{
"docid": "c3b6bfea2d83ea2d88aa70e64e2572f3",
"text": "A wide-range of applications, including Publish/Subscribe, Workflow, and Web-site Personalization, require maintaining user’s interest in expected data as conditional expressions. This paper proposes to manage such expressions as data in Relational Database Systems (RDBMS). This is accomplished 1) by allowing expressions to be stored in a column of a database table and 2) by introducing a SQL EVALUATE operator to evaluate expressions for given data. Expressions when combined with predicates on other forms of data in a database, are just a flexible and powerful way of expressing interest in a data item. The ability to evaluate expressions (via EVALUATE operator) in SQL, enables applications to take advantage of the expressive power of SQL to support complex subscription models. The paper describes the key concepts, presents our approach of managing expressions in Oracle RDBMS, discusses a novel indexing scheme that allows efficient filtering of a large set of expressions, and outlines future directions.",
"title": ""
},
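The record above treats conditional expressions as stored data that can be evaluated against incoming items. The toy sketch below illustrates that inversion of the usual query model in plain Python; the expression format and the eval-based matcher are illustrative assumptions and stand in for the indexed, SQL-level EVALUATE operator described in the record.

```python
# Stored "subscriptions": each row carries a conditional expression as data.
subscriptions = [
    {"id": 1, "expr": "item['category'] == 'laptop' and item['price'] < 1200"},
    {"id": 2, "expr": "item['brand'] in ('acme', 'globex')"},
]

def evaluate(expr, item):
    # a real system would parse and index predicates instead of calling eval()
    return bool(eval(expr, {"__builtins__": {}}, {"item": item}))

incoming = {"category": "laptop", "price": 999, "brand": "acme"}
matches = [s["id"] for s in subscriptions if evaluate(s["expr"], incoming)]
print(matches)   # expressions 1 and 2 both match this item
```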
{
"docid": "eaec7fb5490ccabd52ef7b4b5abd25f6",
"text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-wighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance than other state-of-the-art segmentation methods.",
"title": ""
},
{
"docid": "78d7c2cee14c229e7a12936a0df126e8",
"text": "Online review can help people getting more information about store and product. The potential customers tend to make decision according to it. However, driven by profit, spammers post spurious reviews to mislead the customers by promoting or demoting target store. Previous studies mainly utilize rating as indicator for the detection. However, these studies ignore an important problem that the rating will not necessarily represent the sentiment accurately. In this paper, we first incorporate the sentiment analysis techniques into review spam detection. The proposed method compute sentiment score from the natural language text by a shallow dependency parser. We further discuss the relationship between sentiment score and spam reviews. A series of discriminative rules are established through intuitive observation. In the end, this paper establishes a time series combined with discriminative rules to detect the spam store and spam review efficiently. Experimental results show that the proposed methods in this paper have good detection result and outperform existing methods.",
"title": ""
},
{
"docid": "87d51053f5e66aefaf24318cc2b3ba22",
"text": "In this paper we study the distribution of average user rating of entities in three different domains: restaurants, movies, and products. We find that the distribution is heavily skewed, closely resembling a log-normal in all the cases. In contrast, the distribution of average critic rating is much closer to a normal distribution. We propose user selection bias as the underlying behavioral phenomenon causing this disparity in the two distributions. We show that selection bias can indeed lead to a skew in the distribution of user ratings even when we assume the quality of entities are normally distributed. Finally, we apply these insights to the problem of predicting the overall rating of an entity given its few initial ratings, and obtain a simple method that outperforms strong baselines.",
"title": ""
},
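The record above argues that selection bias alone can skew the distribution of average user ratings even when underlying quality is normally distributed. The small simulation below illustrates that mechanism: users only try items they already expect to like, so low-quality items are rated rarely and mostly by unusually favorable raters, and the observed averages come out right-skewed. All distributions and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, n_users = 3000, 400

quality = rng.normal(0.0, 1.0, n_items)     # latent quality is symmetric / normal
avg_ratings = []
for q in quality:
    taste = rng.normal(0.0, 1.0, n_users)   # each user's affinity for this item
    raters = (q + taste) > 0.8              # selection bias: only promising items get tried
    if raters.any():
        ratings = q + taste[raters] + rng.normal(0.0, 0.3, int(raters.sum()))
        avg_ratings.append(ratings.mean())

avg = np.array(avg_ratings)
skew = np.mean((avg - avg.mean()) ** 3) / avg.std() ** 3
print(f"{len(avg)} of {n_items} items got rated; skewness of average rating = {skew:.2f}")
```

Dropping the selection threshold (rating every item) brings the skewness back toward zero, which is the contrast the abstract draws between user and critic ratings.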
{
"docid": "02a16c7a94b57cbfa2939d1965a7ac89",
"text": "The emergence of antibiotic resistance in pathogenic bacteria has led to renewed interest in exploring the potential of plant-derived antimicrobials (PDAs) as an alternative therapeutic strategy to combat microbial infections. Historically, plant extracts have been used as a safe, effective, and natural remedy for ailments and diseases in traditional medicine. Extensive research in the last two decades has identified a plethora of PDAs with a wide spectrum of activity against a variety of fungal and bacterial pathogens causing infections in humans and animals. Active components of many plant extracts have been characterized and are commercially available; however, research delineating the mechanistic basis of their antimicrobial action is scanty. This review highlights the potential of various plant-derived compounds to control pathogenic bacteria, especially the diverse effects exerted by plant compounds on various virulence factors that are critical for pathogenicity inside the host. In addition, the potential effect of PDAs on gut microbiota is discussed.",
"title": ""
},
{
"docid": "ac9cd78f06be74297bd28a32b5def23c",
"text": "An application of reinforcement learning to a linear-quadratic, differential game is presented. The reinforcement learning system uses a recently developed algorithm, the residual gradient form of advantage updating. The game is a Markov Decision Process (MDP) with continuous time, states, and actions, linear dynamics, and a quadratic cost function. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. The reinforcement learning algorithm for optimal control is modified for differential games in order to find the minimax Presented at the Neural Information Processing Systems Conference, Denver, Colorado, November 28 December 3, 1994. point, rather than the maximum. Simulation results are compared to the optimal solution, demonstrating that the simulated reinforcement learning system converges to the optimal answer. The performance of both the residual gradient and non-residual gradient forms of advantage updating and Qlearning are compared. The results show that advantage updating converges faster than Q-learning in all simulations. The results also show advantage updating converges regardless of the time step duration; Q-learning is unable to converge as the time step duration grows small. * U.S.A.F. Academy, 2354 Fairchild Dr. Suite 6K41, USAFA, CO 80840-6234 1 ADVANTAGE UPDATING The advantage updating algorithm (Baird, 1993) is a reinforcement learning algorithm in which two types of information are stored. For each state x, the value V(x) is stored, representing an estimate of the total discounted return expected when starting in state x and performing optimal actions. For each state x and action u, the advantage, A(x,u), is stored, representing an estimate of the degree to which the expected total discounted reinforcement is increased by performing action u rather than the action currently considered best. The optimal value function V*(x) represents the true value of each state. The optimal advantage function A*(x,u) will be zero if u is the optimal action (because u confers no advantage relative to itself) and A*(x,u) will be negative for any suboptimal u (because a suboptimal action has a negative advantage relative to the best action). The optimal advantage function A* can be defined in terms of the optimal value function V*: ( ) ( ) [ ] A x u t R x u V x V x t t * * * , , ( ) ( ' ) = − + 1 ∆ ∆ ∆ γ (1) The definition of an advantage includes a 1/Dt term to ensure that, for small time step duration Dt, the advantages will not all go to zero. Both the value function and the advantage function are needed during learning, but after convergence to optimality, the policy can be extracted from the advantage function alone. The optimal policy for state x is any u that maximizes A*(x,u). The notation A x A x u u max ( ) max ( , ) = (2) defines Amax(x). If Amax converges to zero in every state, the advantage function is said to be normalized. Advantage updating has been shown to learn faster than Q-learning (Watkins, 1989), especially for continuous-time problems (Baird, 1993). If advantage updating (Baird, 1993) is used to control a deterministic system, there are two equations that are the equivalent of the Bellman equation in value iteration (Bertsekas, 1987). 
These are a pair of two simultaneous equations (Baird, 1993): ( ) A x u A x u R V x V x t u t ( , ) max ( , ' ) ( ' ) ( ) ' − = + − γ ∆ ∆ 1 (3) max ( , ) u A x u = 0 (4) where a time step is of duration Dt, and performing action u in state x results in a reinforcement of R and a transition to state xt+Dt. The optimal advantage and value functions will satisfy these equations. For a given A and V function, the Bellman residual errors, E, as used in Williams and Baird (1993) and defined here as equations (5) and (6).are the degrees to which the two equations are not satisfied: ( ) E x u R x u V x V x t A x u A x u t t t t t t t u t 1 1 ( , ) ( , ) ( ) ( ) ( , ) max ( , ' ) ' = + − − + + γ ∆ ∆ ∆ (5) E x u A x u u 2 ( , ) max ( , ) = − (6) 2 RESIDUAL GRADIENT ALGORITHMS Dynamic programming algorithms can be guaranteed to converge to optimality when used with look-up tables, yet be completely unstable when combined with function-approximation systems (Baird & Harmon, In preparation). It is possible to derive an algorithm that has guaranteed convergence for a quadratic function approximation system (Bradtke, 1993), but that algorithm is specific to quadratic systems. One solution to this problem is to derive a learning algorithm to perform gradient descent on the mean squared Bellman residuals given in (5) and (6). This is called the residual gradient form of an algorithm. There are two Bellman residuals, (5) and (6), so the residual gradient algorithm must perform gradient descent on the sum of the two squared Bellman residuals. It has been found to be useful to combine reinforcement learning algorithms with function approximation systems (Tesauro, 1990 & 1992). If function approximation systems are used for the advantage and value functions, and if the function approximation systems are parameterized by a set of adjustable weights, and if the system being controlled is deterministic, then, for incremental learning, a given weight W in the function-approximation system could be changed according to equation (7) on each time step:",
"title": ""
},
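As a companion to the excerpt above, here is a small tabular surrogate of the advantage-updating residuals E1 and E2 on a made-up five-state chain. It is not the residual-gradient, function-approximation algorithm of the excerpt (and not the differential game); the environment, step sizes, and update scheme are assumptions chosen only to show how the stored V(x) and A(x,u) tables are nudged to reduce the two Bellman residuals.

```python
import numpy as np

n_states, n_actions = 5, 2
gamma, dt, alpha = 0.9, 0.1, 0.05
V = np.zeros(n_states)                  # value estimate per state
A = np.zeros((n_states, n_actions))     # advantage estimate per state-action pair

def step(x, u):
    # toy deterministic chain: action 1 moves right, action 0 moves left (assumption)
    x_next = (x + (1 if u == 1 else -1)) % n_states
    return (1.0 if x_next == 0 else 0.0), x_next

rng = np.random.default_rng(0)
x = 0
for _ in range(20000):
    u = int(rng.integers(n_actions))
    r, x_next = step(x, u)
    # residuals for the current estimates, mirroring equations (5) and (6)
    e1 = (r + gamma**dt * V[x_next] - V[x]) / dt - (A[x, u] - A[x].max())
    e2 = -A[x].max()
    A[x, u] += alpha * e1                # nudge the advantage toward satisfying eq. (3)
    V[x] += 0.01 * e1                    # smaller step for the value estimate
    A[x, np.argmax(A[x])] += alpha * e2  # push max_u A(x,u) toward zero (eq. (4))
    x = x_next

print(np.round(V, 2))
print(np.round(A, 2))
```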
{
"docid": "9ee426885fe9b873992d4c59aa569db6",
"text": "We introduce two data augmentation and normalization techniques, which, used with a CNN-LSTM, significantly reduce Word Error Rate (WER) and Character Error Rate (CER) beyond best-reported results on handwriting recognition tasks. (1) We apply a novel profile normalization technique to both word and line images. (2) We augment existing text images using random perturbations on a regular grid. We apply our normalization and augmentation to both training and test images. Our approach achieves low WER and CER over hundreds of authors, multiple languages and a variety of collections written centuries apart. Image augmentation in this manner achieves state-of-the-art recognition accuracy on several popular handwritten word benchmarks.",
"title": ""
},
{
"docid": "f3e4892a0cc4bfe895d4b3c26440ee9a",
"text": "A compact dual band-notched ultra-wideband (UWB) multiple-input multiple-output (MIMO) antenna with high isolation is designed on a FR4 substrate (27 × 30 × 0.8 mm3). To improve the input impedance matching and increase the isolation for the frequencies ≥ 4.0 GHz, the two antenna elements with compact size of 5.5 × 11 mm2 are connected to the two protruded ground parts, respectively. A 1/3 λ rectangular metal strip producing a 1.0 λ loop path with the corresponding antenna element is used to obtain the notched frequency from 5.15 to 5.85 GHz. For the rejected band of 3.30-3.70 GHz, a 1/4 λ open slot is etched into the radiator. Moreover, the two protruded ground parts are connected by a compact metal strip to reduce the mutual coupling for the band of 3.0-4.0 GHz. The simulated and measured results show a bandwidth with |S11| ≤ -10 dB, |S21| ≤ -20 dB and frequency ranged from 3.0 to 11.0 GHz excluding the two rejected bands, is achieved, and all the measured and calculated results show the proposed UWB MIMO antenna is a good candidate for UWB MIMO systems.",
"title": ""
},
{
"docid": "693dd8eb0370259c4ee5f8553de58443",
"text": "Most research in Interactive Storytelling (IS) has sought inspiration in narrative theories issued from contemporary narratology to either identify fundamental concepts or derive formalisms for their implementation. In the former case, the theoretical approach gives raise to empirical solutions, while the latter develops Interactive Storytelling as some form of “computational narratology”, modeled on computational linguistics. In this paper, we review the most frequently cited theories from the perspective of IS research. We discuss in particular the extent to which they can actually inspire IS technologies and highlight key issues for the effective use of narratology in IS.",
"title": ""
},
{
"docid": "3675229608c949f883b7e400a19b66bb",
"text": "SQL injection is one of the most prominent vulnerabilities for web-based applications. Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks might result in severe consequences such as authentication bypassing, leaking of private information etc. Therefore, testing an application for SQLIV is an important step for ensuring its quality. However, it is challenging as the sources of SQLIV vary widely, which include the lack of effective input filters in applications, insecure coding by programmers, inappropriate usage of APIs for manipulating databases etc. Moreover, existing testing approaches do not address the issue of generating adequate test data sets that can detect SQLIV. In this work, we present a mutation-based testing approach for SQLIV testing. We propose nine mutation operators that inject SQLIV in application source code. The operators result in mutants, which can be killed only with test data containing SQL injection attacks. By this approach, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerabilities Checking (testing) tool (MUSIC) that automatically generates mutants for the applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators with five open source web-based applications written in JSP. We show that the proposed operators are effective for testing SQLIV.",
"title": ""
},
{
"docid": "b24fc322e0fec700ec0e647c31cfd74d",
"text": "Organometal trihalide perovskite solar cells offer the promise of a low-cost easily manufacturable solar technology, compatible with large-scale low-temperature solution processing. Within 1 year of development, solar-to-electric power-conversion efficiencies have risen to over 15%, and further imminent improvements are expected. Here we show that this technology can be successfully made compatible with electron acceptor and donor materials generally used in organic photovoltaics. We demonstrate that a single thin film of the low-temperature solution-processed organometal trihalide perovskite absorber CH3NH3PbI3-xClx, sandwiched between organic contacts can exhibit devices with power-conversion efficiency of up to 10% on glass substrates and over 6% on flexible polymer substrates. This work represents an important step forward, as it removes most barriers to adoption of the perovskite technology by the organic photovoltaic community, and can thus utilize the extensive existing knowledge of hybrid interfaces for further device improvements and flexible processing platforms.",
"title": ""
},
{
"docid": "d438491c76e6afcdd7ad9a6351f1fda8",
"text": "Acoustic word embeddings — fixed-dimensional vector representations of variable-length spoken word segments — have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to the same word, while being dissimilar for segments corresponding to different words. Recent work has found that acoustic word embeddings can outperform dynamic time warping on query-by-example search and related word discrimination tasks. However, the space of embedding models and training approaches is still relatively unexplored. In this paper we present new discriminative embedding models based on recurrent neural networks (RNNs). We consider training losses that have been successful in prior work, in particular a cross entropy loss for word classification and a contrastive loss that explicitly aims to separate same-word and different-word pairs in a “Siamese network” training setting. We find that both classifier-based and Siamese RNN embeddings improve over previously reported results on a word discrimination task, with Siamese RNNs outperforming classification models. In addition, we present analyses of the learned embeddings and the effects of variables such as dimensionality and network structure.",
"title": ""
},
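The record above contrasts a classification loss with a Siamese contrastive loss for learning fixed-dimensional acoustic word embeddings. Below is a hedged PyTorch sketch of the Siamese variant; the GRU architecture, feature dimensionality, margin, and the use of the last hidden state as the embedding are assumptions for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AcousticWordEmbedder(nn.Module):
    def __init__(self, n_feats=39, hidden=256, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_feats, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, emb_dim)

    def forward(self, x):                 # x: (batch, frames, n_feats)
        out, _ = self.rnn(x)
        emb = self.proj(out[:, -1, :])    # summarize the segment with the final time step
        return F.normalize(emb, dim=-1)

def contrastive_loss(anchor, same, diff, margin=0.4):
    # pull same-word pairs together, push different-word pairs at least `margin` apart
    pos = 1.0 - F.cosine_similarity(anchor, same)
    neg = F.relu(F.cosine_similarity(anchor, diff) - (1.0 - margin))
    return (pos + neg).mean()

model = AcousticWordEmbedder()
a, s, d = (torch.randn(8, 100, 39) for _ in range(3))   # dummy MFCC-like segments
loss = contrastive_loss(model(a), model(s), model(d))
loss.backward()
```

A word-classification variant would simply replace the contrastive loss with a softmax over the training vocabulary on top of the same recurrent encoder.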
{
"docid": "cba3209a27e1332f25f29e8b2c323d37",
"text": "One of the technologies that has been showing possibilities of application in educational environments is the Augmented Reality (AR), in addition to its application to other fields such as tourism, advertising, video games, among others. The present article shows the results of an experiment carried out at the National University of Colombia, with the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. The study confirms the potential of this technology to support educational processes represented in the creation of digital resources for mobile devices. The development of learning objects in AR for mobile devices can support teachers in the integration of information and communication technologies (ICT) in the teaching-learning processes.",
"title": ""
}
] |
scidocsrr
|
012c3248a3012536b581907d710b5829
|
ambientROOM: integrating ambient media with architectural space
|
[
{
"docid": "1dc07b02a70821fdbaa9911755d1e4b0",
"text": "The AROMA project is exploring the kind of awareness that people effortless are able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or -more likely -an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sutTiciently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.",
"title": ""
}
] |
[
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "d8d17aa5e709ebd4dda676eadb531ef3",
"text": "The combination of global and partial features has been an essential solution to improve discriminative performances in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty but not efficient or robust to scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information with various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments implemented on the mainstream evaluation datasets including Market-1501, DukeMTMC-reid and CUHK03 indicate that our method robustly achieves state-of-the-art performances and outperforms any existing approaches by a large margin. For example, on Market-1501 dataset in single query mode, we obtain a top result of Rank-1/mAP=96.6%/94.2% with this method after re-ranking.",
"title": ""
},
{
"docid": "9497731525a996844714d5bdbca6ae03",
"text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.",
"title": ""
},
{
"docid": "abfc35847be162ff8744c6e5d8d67d74",
"text": "With the rapid growth of the amount of information, cloud computing servers need to process and analyze large amounts of high-dimensional and unstructured data timely and accurately. This usually requires many query operations. Due to simplicity and ease of use, cuckoo hashing schemes have been widely used in real-world cloud-related applications. However, due to the potential hash collisions, the cuckoo hashing suffers from endless loops and high insertion latency, even high risks of re-construction of entire hash table. In order to address these problems, we propose a cost-efficient cuckoo hashing scheme, called MinCounter. The idea behind MinCounter is to alleviate the occurrence of endless loops in the data insertion by selecting unbusy kicking-out routes. MinCounter selects the “cold” (infrequently accessed), rather than random, buckets to handle hash collisions. We further improve the concurrency of the MinCounter scheme to pursue higher performance and adapt to concurrent applications. MinCounter has the salient features of offering efficient insertion and query services and delivering high performance of cloud servers, as well as enhancing the experiences for cloud users. We have implemented MinCounter in a large-scale cloud testbed and examined the performance by using three real-world traces. Extensive experimental results demonstrate the efficacy and efficiency of MinCounter.",
"title": ""
},
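The record above proposes choosing eviction victims from "cold" slots instead of random ones to tame cuckoo-hashing kick-out chains. The sketch below shows that policy on a toy, single-threaded table; the hash functions, single-slot buckets, and give-up rule are simplifying assumptions, not the concurrent MinCounter design.

```python
class MinCounterCuckoo:
    def __init__(self, size=64, max_kicks=50):
        self.size, self.max_kicks = size, max_kicks
        self.slots = [None] * size      # each entry holds (key, value) or None
        self.counters = [0] * size      # how often each slot has been kicked from

    def _positions(self, key):
        return hash(("h1", key)) % self.size, hash(("h2", key)) % self.size

    def insert(self, key, value):
        for _ in range(self.max_kicks):
            p1, p2 = self._positions(key)
            for p in (p1, p2):
                if self.slots[p] is None:
                    self.slots[p] = (key, value)
                    return True
            # both candidates occupied: kick out the entry in the "colder" slot
            victim = p1 if self.counters[p1] <= self.counters[p2] else p2
            self.counters[victim] += 1
            (key, value), self.slots[victim] = self.slots[victim], (key, value)
        return False                    # give up; a real table would resize or rehash

    def get(self, key):
        for p in self._positions(key):
            entry = self.slots[p]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None

table = MinCounterCuckoo()
for i in range(25):
    table.insert(f"key{i}", i)
print(table.get("key7"))
```

Keeping per-slot counters is what lets the policy steer displacements away from frequently contended ("hot") slots, which is the intuition the abstract gives for shorter kick-out paths.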
{
"docid": "c279eb1ca03937d8321beb4c3c448e81",
"text": "This paper describes the development processes for a cross-platform ubiquitous language learning service via interactive television (iTV) and mobile phone. Adapting a learner-centred design methodology, a number of requirements were gathered from multiple sources that were subsequently used in TAMALLE (television and mobile phone assisted language learning environment) development.Anumber of issues that arise in the context of cross-platform user interface design and architecture for ubiquitous language learning were tackled. Finally, we discuss a multi-method evaluation regime to gauge usability, perceived usefulness and desirability of TAMALLE system. The result broadly revealed an overall positive response from language learners. Although, there were some reported difficulties in reading text and on-screen display mainly on the iTV side of the interface, TAMALLE was perceived to be a usable, useful and desirable tool to support informal language learning and also for gaining new contextual and cultural knowledge.",
"title": ""
},
{
"docid": "96a96b056a1c49d09d1ef6873eb80c6f",
"text": "Raman and Grossmann [Raman, R., & Grossmann, I.E. (1994). Modeling and computational techniques for logic based integer programming. Computers and Chemical Engineering, 18(7), 563–578] and Lee and Grossmann [Lee, S., & Grossmann, I.E. (2000). New algorithms for nonlinear generalized disjunctive programming. Computers and Chemical Engineering, 24, 2125–2141] have developed a reformulation of Generalized Disjunctive Programming (GDP) problems that is based on determining the convex hull of each disjunction. Although the with the quires n order to hod relies m an LP else until ng, retrofit utting plane",
"title": ""
},
{
"docid": "05874da7b27475377dcd8f7afdd1bc5a",
"text": "The main aim of this paper is to provide automatic irrigation to the plants which helps in saving money and water. The entire system is controlled using 8051 micro controller which is programmed as giving the interrupt signal to the sprinkler.Temperature sensor and humidity sensor are connected to internal ports of micro controller via comparator,When ever there is a change in temperature and humidity of the surroundings these sensors senses the change in temperature and humidity and gives an interrupt signal to the micro-controller and thus the sprinkler is activated.",
"title": ""
},
{
"docid": "7e91815398915670fadba3c60e772d14",
"text": "Online reviews are valuable resources not only for consumers to make decisions before purchase, but also for providers to get feedbacks for their services or commodities. In Aspect Based Sentiment Analysis (ABSA), it is critical to identify aspect categories and extract aspect terms from the sentences of user-generated reviews. However, the two tasks are often treated independently, even though they are closely related. Intuitively, the learned knowledge of one task should inform the other learning task. In this paper, we propose a multi-task learning model based on neural networks to solve them together. We demonstrate the improved performance of our multi-task learning model over the models trained separately on three public dataset released by SemEval work-",
"title": ""
},
{
"docid": "4c4376a25aa61e891294708b753dcfec",
"text": "Ransomware, a class of self-propagating malware that uses encryption to hold the victims’ data ransom, has emerged in recent years as one of the most dangerous cyber threats, with widespread damage; e.g., zero-day ransomware WannaCry has caused world-wide catastrophe, from knocking U.K. National Health Service hospitals offline to shutting down a Honda Motor Company in Japan [1]. Our close collaboration with security operations of large enterprises reveals that defense against ransomware relies on tedious analysis from high-volume systems logs of the first few infections. Sandbox analysis of freshly captured malware is also commonplace in operation. We introduce a method to identify and rank the most discriminating ransomware features from a set of ambient (non-attack) system logs and at least one log stream containing both ambient and ransomware behavior. These ranked features reveal a set of malware actions that are produced automatically from system logs, and can help automate tedious manual analysis. We test our approach using WannaCry and two polymorphic samples by producing logs with Cuckoo Sandbox during both ambient, and ambient plus ransomware executions. Our goal is to extract the features of the malware from the logs with only knowledge that malware was present. We compare outputs with a detailed analysis of WannaCry allowing validation of the algorithm’s feature extraction and provide analysis of the method’s robustness to variations of input data—changing quality/quantity of ambient data and testing polymorphic ransomware. Most notably, our patterns are accurate and unwavering when generated from polymorphic WannaCry copies, on which 63 (of 63 tested) antivirus (AV) products fail.",
"title": ""
},
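The record above ranks the log features that best separate an ambient stream from a stream that also contains ransomware behavior. The toy snippet below shows the general shape of such a contrast-based ranking; the event names and the scoring rule are illustrative assumptions, not the paper's method or real Cuckoo Sandbox output.

```python
from collections import Counter

# hypothetical event streams: ambient-only vs. ambient plus ransomware activity
ambient = ["reg_read", "file_read", "net_dns", "file_read", "reg_read"]
mixed   = ["file_read", "file_encrypt", "file_rename", "file_encrypt",
           "vss_delete", "net_dns", "file_encrypt"]

amb, mix = Counter(ambient), Counter(mixed)
features = set(amb) | set(mix)

def score(f):
    # features frequent in the mixed stream but rare in ambient data rank highest
    return mix[f] / len(mixed) - amb[f] / len(ambient)

for f in sorted(features, key=score, reverse=True)[:3]:
    print(f, round(score(f), 3))
```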
{
"docid": "81384d801ba37feaca150eca5621afbb",
"text": "Next-generation sequencing technologies have had a dramatic impact in the field of genomic research through the provision of a low cost, high-throughput alternative to traditional capillary sequencers. These new sequencing methods have surpassed their original scope and now provide a range of utility-based applications, which allow for a more comprehensive analysis of the structure and content of microbial genomes than was previously possible. With the commercialization of a third generation of sequencing technologies imminent, we discuss the applications of current next-generation sequencing methods and explore their impact on and contribution to microbial genome research.",
"title": ""
},
{
"docid": "b29caaa973e60109fbc2f68e0eb562a6",
"text": "This correspondence introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces are measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity ( 0 2 0 ) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (Ds) . In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packets analysis suggest that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification and subtle discrimination of texture.",
"title": ""
},
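The record above builds texture signatures from the energy and entropy of every wavelet packet channel. The sketch below does the same with a hand-rolled Haar filter pair so it stays dependency-free; the Haar basis and two-level depth are simplifying assumptions, whereas the study itself uses higher-regularity Daubechies filters and deeper decompositions.

```python
import numpy as np

def haar_split(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # low-pass
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # high-pass
    return a, d

def haar_split_2d(img):
    rows = [np.stack(haar_split(r)) for r in img]           # filter rows
    lo = np.array([r[0] for r in rows])
    hi = np.array([r[1] for r in rows])
    out = []
    for band in (lo, hi):                                    # then filter columns
        cols = [np.stack(haar_split(c)) for c in band.T]
        out.append(np.array([c[0] for c in cols]).T)
        out.append(np.array([c[1] for c in cols]).T)
    return out                                               # LL, LH, HL, HH

def packet_features(img, levels=2):
    bands = [img.astype(float)]
    for _ in range(levels):
        # wavelet *packets*: every sub-band is split again, not just the approximation
        bands = [b for band in bands for b in haar_split_2d(band)]
    feats = []
    for b in bands:
        e = np.sum(b ** 2)
        p = b ** 2 / (e + 1e-12)
        feats += [e, -np.sum(p * np.log(p + 1e-12))]         # energy, entropy per channel
    return np.array(feats)

texture = np.random.default_rng(1).random((64, 64))
print(packet_features(texture).shape)    # 2 levels -> 16 channels x (energy, entropy)
```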
{
"docid": "8efe66661d6c1bb7e96c4c2cb2fbdeec",
"text": "IT Leader Sample SIC Code Control Sample SIC Code Consol Energy Inc 1220 Walter Energy Inc 1220 Halliburton Co 1389 Schlumberger Ltd 1389 Standard Pacific Corp 1531 M/I Homes Inc 1531 Beazer Homes USA Inc 1531 Hovnanian Entrprs Inc -Cl A 1531 Toll Brothers Inc 1531 MDC Holdings Inc 1531 D R Horton Inc 1531 Ryland Group Inc 1531 Lennar Corp 1531 KB Home 1531 Granite Construction Inc 1600 Empresas Ica Soc Ctl ADR 1600 Fluor Corp 1600 Alstom ADR 1600 Gold Kist Inc 2015 Sadia Sa ADR 2015 Kraft Foods Inc 2000 ConAgra Foods Inc 2000 Smithfield Foods Inc 2011 Hormel Foods Corp 2011 Campbell Soup Co 2030 Heinz (H J) Co 2030 General Mills Inc 2040 Kellogg Co 2040 Imperial Sugar Co 2060 Wrigley (Wm) Jr Co 2060 Hershey Co 2060 Tate & Lyle Plc ADR 2060 Molson Coors Brewing Co 2082 Comp Bebidas Americas ADR 2082 Constellation Brands Cl A 2084 Gruma S.A.B. de C.V. ADR B 2040 Brown-Forman Cl B 2085 Coca Cola Hellenic Bttlg ADR 2086",
"title": ""
},
{
"docid": "6fcaea5228ea964854ab92cca69859d7",
"text": "The well-characterized cellular and structural components of the kidney show distinct regional compositions and distribution of lipids. In order to more fully analyze the renal lipidome we developed a matrix-assisted laser desorption/ionization mass spectrometry approach for imaging that may be used to pinpoint sites of changes from normal in pathological conditions. This was accomplished by implanting sagittal cryostat rat kidney sections with a stable, quantifiable and reproducible uniform layer of silver using a magnetron sputtering source to form silver nanoparticles. Thirty-eight lipid species including seven ceramides, eight diacylglycerols, 22 triacylglycerols, and cholesterol were detected and imaged in positive ion mode. Thirty-six lipid species consisting of seven sphingomyelins, 10 phosphatidylethanolamines, one phosphatidylglycerol, seven phosphatidylinositols, and 11 sulfatides were imaged in negative ion mode for a total of seventy-four high-resolution lipidome maps of the normal kidney. Thus, our approach is a powerful tool not only for studying structural changes in animal models of disease, but also for diagnosing and tracking stages of disease in human kidney tissue biopsies.",
"title": ""
},
{
"docid": "6ced60cadf69a3cd73bcfd6a3eb7705e",
"text": "This review article summarizes the current literature regarding the analysis of running gait. It is compared to walking and sprinting. The current state of knowledge is presented as it fits in the context of the history of analysis of movement. The characteristics of the gait cycle and its relationship to potential and kinetic energy interactions are reviewed. The timing of electromyographic activity is provided. Kinematic and kinetic data (including center of pressure measurements, raw force plate data, joint moments, and joint powers) and the impact of changes in velocity on these findings is presented. The status of shoewear literature, alterations in movement strategies, the role of biarticular muscles, and the springlike function of tendons are addressed. This type of information can provide insight into injury mechanisms and training strategies. Copyright 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "5aed256aaca0a1f2fe8a918e6ffb62bd",
"text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https: //github.com/pujols/Zero-shot-learning-journal. Soravit Changpinyo Google AI E-mail: schangpi@google.com Wei-Lun Chao Cornell University, Department of Computer Science E-mail: weilunchao760414@gmail.com Boqing Gong Tencent AI Lab E-mail: boqinggo@outlook.com Fei Sha University of Southern California, Department of Computer Science E-mail: feisha@usc.edu",
"title": ""
},
{
"docid": "6c17e311ff57efd4cce31416bf6ace54",
"text": "Demands faced by health care professionals include heavy caseloads, limited control over the work environment, long hours, as well as organizational structures and systems in transition. Such conditions have been directly linked to increased stress and symptoms of burnout, which in turn, have adverse consequences for clinicians and the quality of care that is provided to patients. Consequently, there exists an impetus for the development of curriculum aimed at fostering wellness and the necessary self-care skills for clinicians. This review will examine the potential benefits of mindfulness-based stress reduction (MBSR) programs aimed at enhancing well-being and coping with stress in this population. Empirical evidence indicates that participation in MBSR yields benefits for clinicians in the domains of physical and mental health. Conceptual and methodological limitations of the existing studies and suggestions for future research are discussed.",
"title": ""
},
{
"docid": "ca20d27b1e6bfd1f827f967473d8bbdd",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
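The record above pools rectangular templates over body parts and evaluates them with simple differences of region sums, which an integral image makes O(1) per rectangle. The snippet below shows that mechanic for one hypothetical head-versus-torso template on a 128x64 window; the template coordinates and normalization are assumptions, not the paper's learned feature pool.

```python
import numpy as np

def integral(img):
    # cumulative sums along both axes give constant-time rectangle sums
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    # sum over rows [top, bottom) and columns [left, right)
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def head_vs_torso_feature(window):
    # window: 128x64 grayscale detection window (hypothetical template regions)
    ii = integral(window.astype(float))
    head = rect_sum(ii, 0, 16, 32, 48)
    torso = rect_sum(ii, 32, 8, 80, 56)
    return head / (32 * 32) - torso / (48 * 48)

window = np.random.default_rng(3).random((128, 64))
print(round(head_vs_torso_feature(window), 4))
```

Evaluating a whole pool of such templates over several image channels (gradients, color, etc.) yields the multi-modal, multi-channel features the abstract describes.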
{
"docid": "4d502d1fbcdc5ea30bf54b43daa33352",
"text": "This paper investigates linearity enhancements in GaN based Doherty power amplifiers (DPA) with the implementation of forward gate current blocking. Using a simple p-n diode to limit gate current, both open loop and digitally pre-distorted (DPD) linearity for wideband, high peak to average ratio modulated signals, such as LTE, are improved. Forward gate current blocking (FCB) is compatible with normally-on III-V HEMT technology where positive gate current is observed which results in nonlinear operation of RF transistor. By blocking positive gate current, waveform clipping is mitigated at the device gate node. Consequently, through dynamic biasing, the effective gate bias at the transistor input is adjusted limiting the RF input signal peaks entering the non-linear regime of the gate Schottky diode inherent to GaN devices. The proposed technique demonstrates more than a 3 dBc improvement in DPD corrected linearity in adjacent channels when four 20 MHz LTE carriers are applied.",
"title": ""
},
{
"docid": "81126b57a29b4c9aee46ecb04c7f43ca",
"text": "Within the field of bibliometrics, there is sustained interest in how nations “compete” in terms of academic disciplines, and what determinants explain why countries may have a specific advantage in one discipline over another. However, this literature has not, to date, presented a comprehensive structured model that could be used in the interpretation of a country’s research profile and aca‐ demic output. In this paper, we use frameworks from international business and economics to pre‐ sent such a model. Our study makes four major contributions. First, we include a very wide range of countries and disci‐ plines, explicitly including the Social Sciences, which unfortunately are excluded in most bibliometrics studies. Second, we apply theories of revealed comparative advantage and the competitive ad‐ vantage of nations to academic disciplines. Third, we cluster our 34 countries into five different groups that have distinct combinations of revealed comparative advantage in five major disciplines. Finally, based on our empirical work and prior literature, we present an academic diamond that de‐ tails factors likely to explain a country’s research profile and competitiveness in certain disciplines.",
"title": ""
},
{
"docid": "dba3f0c314dfdb8e1577c48a56bec077",
"text": "The deterministic annealing approach to clustering and its extensions has demonstrated substantial performance improvement over standard supervised and unsupervised learning methods in a variety of important applications including compression, estimation, pattern recognition and classification, and statistical regression. The method offers three important features: 1) the ability to avoid many poor local optima; 2) applicability to many different structures/architectures; and 3) the ability to minimize the right cost function even when its gradients vanish almost everywhere, as in the case of the empirical classification error. It is derived within a probabilistic framework from basic information theoretic principles (e.g., maximum entropy and random coding). The application-specific cost is minimized subject to a constraint on the randomness (Shannon entropy) of the solution, which is gradually lowered. We emphasize intuition gained from analogy to statistical physics, where this is an annealing process that avoids many shallow local minima of the specified cost and, at the limit of zero “temperature,” produces a nonrandom (hard) solution. Alternatively, the method is derived within rate-distortion theory, where the annealing process is equivalent to computation of Shannon’s rate-distortion function, and the annealing temperature is inversely proportional to the slope of the curve. This provides new insights into the method and its performance, as well as new insights into rate-distortion theory itself. The basic algorithm is extended by incorporating structural constraints to allow optimization of numerous popular structures including vector quantizers, decision trees, multilayer perceptrons, radial basis functions, and mixtures of experts. Experimental results show considerable performance gains over standard structure-specific and application-specific training methods. The paper concludes with a brief discussion of extensions of the method that are currently under investigation.",
"title": ""
}
] |
scidocsrr
|
fed602a6f5f9fd625f2c3c12238fcb04
|
New diagnostic criteria and severity assessment of acute cholangitis in revised Tokyo guidelines
|
[
{
"docid": "84dbdf4c145fc8213424f6d51550faa9",
"text": "Because acute cholangitis sometimes rapidly progresses to a severe form accompanied by organ dysfunction, caused by the systemic inflammatory response syndrome (SIRS) and/or sepsis, prompt diagnosis and severity assessment are necessary for appropriate management, including intensive care with organ support and urgent biliary drainage in addition to medical treatment. However, because there have been no standard criteria for the diagnosis and severity assessment of acute cholangitis, practical clinical guidelines have never been established. The aim of this part of the Tokyo Guidelines is to propose new criteria for the diagnosis and severity assessment of acute cholangitis based on a systematic review of the literature and the consensus of experts reached at the International Consensus Meeting held in Tokyo 2006. Acute cholangitis can be diagnosed if the clinical manifestations of Charcot's triad, i.e., fever and/or chills, abdominal pain (right upper quadrant or epigastric), and jaundice are present. When not all of the components of the triad are present, then a definite diagnosis can be made if laboratory data and imaging findings supporting the evidence of inflammation and biliary obstruction are obtained. The severity of acute cholangitis can be classified into three grades, mild (grade I), moderate (grade II), and severe (grade III), on the basis of two clinical factors, the onset of organ dysfunction and the response to the initial medical treatment. \"Severe (grade III)\" acute cholangitis is defined as acute cholangitis accompanied by at least one new-onset organ dysfunction. \"Moderate (grade II)\" acute cholangitis is defined as acute cholangitis that is unaccompanied by organ dysfunction, but that does not respond to the initial medical treatment, with the clinical manifestations and/or laboratory data not improved. \"Mild (grade I)\" acute cholangitis is defined as acute cholangitis that responds to the initial medical treatment, with the clinical findings improved.",
"title": ""
}
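The three-grade rule described above reduces to two clinical factors, so it can be expressed as a small decision function. The sketch below is only an illustration of that logic; the boolean inputs are simplifications of the full clinical criteria in the guidelines, and the function name is invented.

def grade_acute_cholangitis(new_organ_dysfunction: bool,
                            responds_to_initial_treatment: bool) -> str:
    """Map the two clinical factors from the Tokyo Guidelines summary to a severity grade."""
    if new_organ_dysfunction:
        return "Grade III (severe)"   # at least one new-onset organ dysfunction
    if not responds_to_initial_treatment:
        return "Grade II (moderate)"  # no organ dysfunction, but no response to initial treatment
    return "Grade I (mild)"           # responds to the initial medical treatment

# Example: no organ dysfunction, but not improving on initial treatment -> Grade II.
print(grade_acute_cholangitis(False, False))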
] |
[
{
"docid": "d977a769528fc2ffd9b622a1a1e9f0d4",
"text": "This chapter is to provide a tutorial and pointers to results and related work on timed automata with a focus on semantical and algorithmic aspects of verification tools. We present the concrete and abstract semantics of timed automata (based on transition rules, regions and zones), decision problems, and algorithms for verification. A detailed description on DBM (Difference Bound Matrices) is included, which is the central data structure behind several verification tools for timed systems. As an example, we give a brief introduction to the tool UPPAAL.",
"title": ""
},
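Since the passage above singles out DBMs as the central data structure, a tiny sketch may help make them concrete. The snippet below is an illustrative, simplified DBM over one clock (index 0 is the constant reference clock) with Floyd-Warshall canonicalization and an emptiness check; for brevity it ignores the strict/non-strict bound distinction that a real implementation such as UPPAAL's would track.

def canonicalize(dbm):
    """Tighten a DBM in place via Floyd-Warshall all-pairs shortest paths.
    dbm[i][j] is an upper bound on the clock difference clock_i - clock_j."""
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dbm[i][k] + dbm[k][j] < dbm[i][j]:
                    dbm[i][j] = dbm[i][k] + dbm[k][j]
    return dbm

def is_empty(dbm):
    # A negative diagonal entry after closure signals a negative cycle: the zone has no solutions.
    return any(dbm[i][i] < 0 for i in range(len(dbm)))

# Zone over a single clock x: 2 <= x <= 5.
zone = [[0, -2],   # 0 - x <= -2, i.e. x >= 2
        [5,  0]]   # x - 0 <= 5,  i.e. x <= 5
canonicalize(zone)
print(is_empty(zone))  # False: the zone is satisfiable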
{
"docid": "93d8b8afe93d10e54bf4a27ba3b58220",
"text": "Researchers interested in emotion have long struggled with the problem of how to elicit emotional responses in the laboratory. In this article, we summarise five years of work to develop a set of films that reliably elicit each of eight emotional states (amusement, anger, contentment, disgust, fear, neutral, sadness, and surprise). After evaluating over 250 films, we showed selected film clips to an ethnically diverse sample of 494 English-speaking subjects. We then chose the two best films for each of the eight target emotions based on the intensity and discreteness of subjects' responses to each film. We found that our set of 16 films successfully elicited amusement, anger, contentment. disgust, sadness, surprise, a relatively neutral state, and, to a lesser extent, fear. We compare this set of films with another set recently described by Philippot (1993), and indicate that detailed instructions for creating our set of film stimuli will be provided on request.",
"title": ""
},
{
"docid": "e946deae6e1d441c152dca6e52268258",
"text": "The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donder's law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~ 1deg), using more light sources and calibration points can result in lower average errors.",
"title": ""
},
{
"docid": "d54aff38bab1a8877877ddba9e20e88d",
"text": "SiMultaneous Acquisition of Spatial Harmonics (SMASH) is a new fast-imaging technique that increases MR image acquisition speed by an integer factor over existing fast-imaging methods, without significant sacrifices in spatial resolution or signal-to-noise ratio. Image acquisition time is reduced by exploiting spatial information inherent in the geometry of a surface coil array to substitute for some of the phase encoding usually produced by magnetic field gradients. This allows for partially parallel image acquisitions using many of the existing fast-imaging sequences. Unlike the data combination algorithms of prior proposals for parallel imaging, SMASH reconstruction involves a small set of MR signal combinations prior to Fourier transformation, which can be advantageous for artifact handling and practical implementation. A twofold savings in image acquisition time is demonstrated here using commercial phased array coils on two different MR-imaging systems. Larger time savings factors can be expected for appropriate coil designs.",
"title": ""
},
{
"docid": "3c1bb5a1ef9c108754bf5594d1bc3ff6",
"text": "Over the last few decades, the advances in disciplines such as neuroscience and engineering have introduced the brain-computer interface (BCI) as a promising tool for neurorehabilitation and neurophysiology research. BCI research primarily aims at development of assistive and rehabilitation strategies for motor-impaired users and, hence, sensorimotor-rhythm (SMR)based BCIs are widely explored. In this article, we provide a summary of recent advances in motor control BCI, specifically in movement kinematics decoding and motor control of localized areas of the limb. We further discuss the research challenges and future scope of work in this area of research.",
"title": ""
},
{
"docid": "79f1473d4eb0c456660543fda3a648f1",
"text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.",
"title": ""
},
{
"docid": "13e2b22875e1a23e9e8ea2f80671c74e",
"text": "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.",
"title": ""
},
{
"docid": "004da753abb6cb84f1ba34cfb4dacc67",
"text": "The aim of this study was to present a method for endodontic management of a maxillary first molar with unusual C-shaped morphology of the buccal root verified by cone-beam computed tomography (CBCT) images. This rare anatomical variation was confirmed using CBCT, and nonsurgical endodontic treatment was performed by meticulous evaluation of the pulpal floor. Posttreatment image revealed 3 independent canals in the buccal root obturated efficiently to the accepted lengths in all 3 canals. Our study describes a unique C-shaped variation of the root canal system in a maxillary first molar, involving the 3 buccal canals. In addition, our study highlights the usefulness of CBCT imaging for accurate diagnosis and management of this unusual canal morphology.",
"title": ""
},
{
"docid": "660a14e0b194621898d0492b6db3ea09",
"text": "Machine vision-based PCB defect inspection system is designed to meet high speed and high precision requirement in PCB manufacture industry field, which is the combination of software and hardware. This paper firstly introduced the whole system structure and the principle of vision detection, while described the relevant key technologies used during the PCB defect inspection, finally implemented one set of test system with the key technologies mentioned. The experimental results show that the defect of PCB can be effectively inspected, located and recognized with the key technologies.",
"title": ""
},
{
"docid": "61a6efb791fbdabfa92448cf39e17e8c",
"text": "This work deals with the design of a wideband microstrip log periodic array operating between 4 and 18 GHz (thus working in C,X and Ku bands). A few studies, since now, have been proposed but they are significantly less performing and usually quite complicated. Our solution is remarkably simple and shows both SWR and gain better than likely structures proposed in the literature. The same antenna can also be used as an UWB antenna. The design has been developed using CST MICROWAVE STUDIO 2009, a general purpose and specialist tool for the 3D electromagnetic simulation of microwave high frequency components.",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "4dc2f37a8e4aa1a5968233e2d7b0f12b",
"text": "Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.",
"title": ""
},
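To make the k-core analysis described above concrete, the sketch below builds a toy retweet network with networkx, extracts its main core, and then "disrupts" the network by removing the most central accounts, mirroring the penalization experiment mentioned in the abstract. The account names and edges are made up for illustration; real input would be the retweet diffusion network collected by Hoaxy.

import networkx as nx

# Hypothetical retweet edges (retweeter, original poster); real data would come from Twitter.
edges = [("alice", "bot1"), ("bob", "bot1"), ("carol", "bot1"),
         ("bot1", "bot2"), ("carol", "bob")]
G = nx.Graph(edges)  # an undirected view is enough for k-core structure

coreness = nx.core_number(G)            # k-shell index of every account
k_max = max(coreness.values())
main_core = nx.k_core(G, k=k_max)       # the densest, most central subnetwork

print(sorted(main_core.nodes()))        # accounts in the main core
# Disruption experiment: penalize (remove) the main-core accounts and re-check the structure.
G.remove_nodes_from(list(main_core.nodes()))
print(nx.core_number(G))                # coreness of what remains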
{
"docid": "5cc7f7aae87d95ea38c2e5a0421e0050",
"text": "Scrum is a structured framework to support complex product development. However, Scrum methodology faces a challenge of managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in Scrum of Scrums; to make the system work after integration of all parts; to reduce the dependencies between the parts of system; and to prevent the duplication of parts in the system. [Qurashi SA, Qureshi MRJ. Scrum of Scrums Solution for Large Size Teams Using Scrum Methodology. Life Sci J 2014;11(8):443-449]. (ISSN:1097-8135). http://www.lifesciencesite.com. 58",
"title": ""
},
{
"docid": "3c5e3f2fe99cb8f5b26a880abfe388f8",
"text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.",
"title": ""
},
{
"docid": "76e15dd4090301ec855fdc3e22ff238f",
"text": "Robert Godwin-Jones Virginia Commonwealth University It wasn’t that long ago that the most exciting thing you could so with your new mobile phone was to download a ringtone. Today, new iPhone or Android phone users face the quandary of which of the hundreds of thousands of apps (applications) they should choose. It seems that everyone from federal government agencies to your local bakery has an app available. This phenomenon, not surprisingly has led to tremendous interest among educators. Mobile learning (often “m-learning”) is in itself not new, but new devices with enhanced capabilities have dramatically increased the interest level, including among language educators. The Apple iPad and other new tablet computers are adding to the mobile app frenzy. In this column we will explore the state of language learning apps, the devices they run on, and how they are developed.",
"title": ""
},
{
"docid": "885e475070fd801bde36a1fdc9852489",
"text": "Motor systems are very important in modern society. They convert almost 60% of the electricity produced in the U.S. into other forms of energy to provide power to other equipment. In the performance of all motor systems, bearings play an important role. Many problems arising in motor operations are linked to bearing faults. In many cases, the accuracy of the instruments and devices used to monitor and control the motor system is highly dependent on the dynamic performance of the motor bearings. Thus, fault diagnosis of a motor system is inseparably related to the diagnosis of the bearing assembly. In this paper, bearing vibration frequency features are discussed for motor bearing fault diagnosis. This paper then presents an approach for motor rolling bearing fault diagnosis using neural networks and time/frequency-domain bearing vibration analysis. Vibration simulation is used to assist in the design of various motor rolling bearing fault diagnosis strategies. Both simulation and real-world testing results obtained indicate that neural networks can be effective agents in the diagnosis of various motor bearing faults through the measurement and interpretation of motor bearing vibration signatures.",
"title": ""
},
{
"docid": "5e3375e88ada445d23082fb72f6a1dfd",
"text": "This paper considers recursive tracking of one mobile emitter using a sequence of time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurement pairs obtained by one pair of sensors. We consider only a single emitter without data association issues (no missed detections or false measurements). Each TDOA measurement defines a region of possible emitter locations around a unique hyperbola. This likelihood function is approximated by a Gaussian mixture, which leads to a dynamic bank of Kalman filters tracking algorithm. The FDOA measurements update relative probabilities and estimates of individual Kalman filters. This approach results in a better track state probability density function approximation by a Gaussian mixture, and tracking results near the Cramer-Rao lower bound. Proposed algorithm is also applicable in other cases of nonlinear information fusion. The performance of proposed Gaussian mixture approach is evaluated using a simulation study, and compared with a bank of EKF filters and the Cramer-Rao lower bound.",
"title": ""
},
{
"docid": "6a3695fd6a358fa39a2641a478caf38c",
"text": "With the increase in the number of vehicles, many intelligent systems have been developed to help drivers to drive safely. Lane detection is a crucial element of any driver assistance system. At present, researchers working on lane detection are confronted with several major challenges, such as attaining robustness to inconsistencies in lighting and background clutter. To address these issues in this work, we propose a method named Lane Detection with Two-stage Feature Extraction (LDTFE) to detect lanes, whereby each lane has two boundaries. To enhance robustness, we take lane boundary as collection of small line segments. In our approach, we apply a modified HT (Hough Transform) to extract small line segments of the lane contour, which are then divided into clusters by using the DBSCAN (Density Based Spatial Clustering of Applications with Noise) clustering algorithm. Then, we can identify the lanes by curve fitting. The experimental results demonstrate that our modified HT works better for LDTFE than LSD (Line Segment Detector). Through extensive experiments, we demonstrate the outstanding performance of our method on the challenging dataset of road images compared with state-of-the-art lanedetection methods. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
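The two-stage idea in the passage above — extract small line segments, then group them into lane-boundary candidates with DBSCAN and fit a curve — can be sketched roughly as follows. This is an illustrative approximation only: it uses OpenCV's standard probabilistic Hough transform in place of the authors' modified HT, and all parameter values and feature choices are assumptions rather than the paper's.

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_lane_candidates(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Stage 1: extract small line segments (standard probabilistic Hough here,
    # standing in for the paper's modified Hough transform).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                               minLineLength=15, maxLineGap=5)
    if segments is None:
        return []
    segments = segments.reshape(-1, 4)  # rows of (x1, y1, x2, y2)
    # Stage 2: cluster segments into lane-boundary candidates with DBSCAN,
    # using midpoint position and (doubled) orientation as features.
    mid = (segments[:, :2] + segments[:, 2:]) / 2.0
    angle = np.arctan2(segments[:, 3] - segments[:, 1],
                       segments[:, 2] - segments[:, 0])
    features = np.column_stack([mid / 100.0, np.cos(2 * angle), np.sin(2 * angle)])
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(features)
    # Final step: fit a quadratic curve to the endpoints of each cluster.
    lanes = []
    for lab in set(labels) - {-1}:
        pts = np.vstack([segments[labels == lab][:, :2],
                         segments[labels == lab][:, 2:]])
        lanes.append(np.polyfit(pts[:, 1], pts[:, 0], deg=2))  # x as a function of y
    return lanes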
{
"docid": "fc32e7b46094c1cfe878c8324b91fcf2",
"text": "The recent increase in information technologies dedicated to optimal design, associated with the progress of the numerical tools for predicting ship hydrodynamic performances, allows significant improvement in ship design. A consortium of fourteen European partners – bringing together ship yards, model basins, consultants, research centres and universities – has therefore conducted a three years European R&D project (FANTASTIC) with the goal to improve the functional design of ship hull shapes. The following key issues were thus considered: parametric shape modelling was worked on through several complementary approaches, CFD tools and associated interfaces were enhanced to meet efficiency and robustness requirements, appropriate design space exploration and optimisation techniques were investigated. The resulting procedures where then implemented, for practical assessment purposes, in some end-users design environments, and a number of applications were undertaken.. Significant gains can be expected from this approach in design, in term of time used for performance analysis and explored range of design variations.",
"title": ""
},
{
"docid": "6b85a2e17f4fe6073527ddf2d1f4d4c1",
"text": "EU policies call for the strengthening of Europe's innovative capacity and the development of a creative and knowledge-intensive economy and society through reinforcing the role of education and training in the knowledge triangle and focusing school curricula on creativity, innovation and entrepreneurship. This report brings evidence to the debate on the status, barriers and enablers for creativity and innovation in compulsory schooling in Europe. It is the final report of the project: ‘Creativity and Innovation in Education and Training in the EU27 (ICEAC)’ carried out by IPTS in collaboration with DG Education and Culture, highlighting the main messages gathered from each phase of the study: a literature review, a survey with teachers, an analysis of curricula and of good practices, stakeholder and expert interviews, and experts workshops. Based on this empirical material, five major areas for improvement are proposed to enable more creative learning and innovative teaching in Europe: curricula, pedagogies and assessment, teacher training, ICT and digital media, and educational culture and leadership. The study highlights the need for action at both national and European level to bring about the changes required for an open and innovative European educational culture based on the creative and innovative potential of its future generation. How to obtain EU publications Our priced publications are available from EU Bookshop (http://bookshop.europa.eu), where you can place an order with the sales agent of your choice. The Publications Office has a worldwide network of sales agents. You can obtain their contact details by sending a fax to (352) 29 29-42758. The mission of the Joint Research Centre is to provide customer-driven scientific and technical support for the conception, development, implementation and monitoring of European Union policies. As a service of the European Commission, the Joint Research Centre functions as a reference centre of science and technology for the Union. Close to the policy-making process, it serves the common interest of the Member States, while being independent of special interests, whether private or national. LF-N A -275-EN -C",
"title": ""
}
] |
scidocsrr
|
99cafc2b1c623ef6e8f612225d9dad95
|
Robust Predictive Control for semi-autonomous vehicles with an uncertain driver model
|
[
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
}
] |
[
{
"docid": "e3e7cc9c45d1126adb81f5b02b7afa2e",
"text": "This paper proposes a signal-to-noise-ratio (SNR) aware convolutional neural network (CNN) model for speech enhancement (SE). Because the CNN model can deal with local temporal-spectral structures of speech signals, it can effectively disentangle the speech and noise signals given the noisy speech signals. In order to enhance the generalization capability and accuracy, we propose two SNR-aware algorithms for CNN modeling. The first algorithm employs a multi-task learning (MTL) framework, in which restoring clean speech and estimating SNR level are formulated as the main and the secondary tasks, respectively, given the noisy speech input. The second algorithm is an SNR adaptive denoising, in which the SNR level is explicitly predicted in the first step, and then an SNR-dependent CNN model is selected for denoising. Experiments were carried out to test the two SNR-aware algorithms for CNN modeling. Results demonstrate that CNN with the two proposed SNR-aware algorithms outperform the deep neural network counterpart in terms of standardized objective evaluations when using the same number of layers and nodes. Moreover, the SNR-aware algorithms can improve the denoising performance with unseen SNR levels, suggesting their promising generalization capability for real-world applications.",
"title": ""
},
{
"docid": "6147c993e4c7f5b9daf18f99c374b129",
"text": "We propose an efficient text summarization technique that involves two basic operations. The first operation involves finding coherent chunks in the document and the second operation involves ranking the text in the individual coherent chunks and picking the sentences that rank above a given threshold. The coherent chunks are formed by exploiting the lexical relationship between adjacent sentences in the document. Occurrence of words through repetition or relatedness by sense relation plays a major role in forming a cohesive tie. The proposed text ranking approach is based on a graph theoretic ranking model applied to text summarization task.",
"title": ""
},
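The graph-theoretic ranking step described above can be illustrated with a short TextRank-style sketch: build a sentence-similarity graph and run PageRank over it, then keep the top-scoring sentences. The similarity measure here is plain word overlap and the coherent-chunking step is omitted, so this is a simplified stand-in for the paper's method rather than a reproduction of it.

import itertools
import networkx as nx

def rank_sentences(sentences, top_n=2):
    # Similarity = normalized word overlap between two sentences.
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        if not wa or not wb:
            return 0.0
        return len(wa & wb) / (len(wa) + len(wb))

    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = overlap(sentences[i], sentences[j])
        if w > 0:
            g.add_edge(i, j, weight=w)

    scores = nx.pagerank(g, weight="weight")      # graph-based sentence ranking
    best = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return [sentences[i] for i in sorted(best)]   # keep original sentence order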
{
"docid": "a5763d4909edb39b421272be2a546e82",
"text": "We summarize all available amphibian and reptile species distribution data from the northeast Mindanao faunal region, including small islands associated with this subcenter of endemic vertebrate biodiversity. Together with all publicly available historical information from biodiversity repositories, we present new data from several major herpetological surveys, including recently conducted inventories on four major mountains of northeast Mindanao, and adjacent islands of Camiguin Sur, Dinagat, and Siargao. We present species accounts for all taxa, comment on unresolved taxonomic problems, and provide revisions to outdated IUCN conservation status assessments in cases where our new data significantly alter earlier classification status summaries. Together, our comprehensive analysis of this fauna suggests that the greater Mindanao faunal region possesses distinct subcenters of amphibian and reptile species diversity, and that until this area is revisited and its fauna and actually studied, with on-the-ground field work including targeted surveys of species distributions coupled to the study their natural history, our understanding of the diversity and conservation status of southern Philippine herpetological fauna will remain incomplete. Nevertheless, the northeast Mindanao geographical area (Caraga Region) appears to have the highest herpetological species diversity (at least 126 species) of any comparably-sized Philippine faunal subregion.",
"title": ""
},
{
"docid": "8d0066400985b2577f4fbe8013d5ba1d",
"text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective counter-measures have drawn significant investment from governments, companies, and empirical research. Despite a large number of emerging scientific studies to address the problem, a major limitation of existing work is the lack of comparative evaluations, which makes it difficult to assess the contribution of individual works. This paper introduces a new method based on a deep neural network combining convolutional and gated recurrent networks. We conduct an extensive evaluation of the method against several baselines and state of the art on the largest collection of publicly available Twitter datasets to date, and show that compared to previously reported results on these datasets, our proposed method is able to capture both word sequence and order information in short texts, and it sets new benchmark by outperforming on 6 out of 7 datasets by between 1 and 13 percents in F1. We also extend the existing dataset collection on this task by creating a new dataset covering different topics.",
"title": ""
},
{
"docid": "54093733f08ced4d9e3a5362235bd944",
"text": "Tumour-suppressor genes are indispensable for the maintenance of genomic integrity. Recently, several of these genes, including those encoding p53, PTEN, RB1 and ARF, have been implicated in immune responses and inflammatory diseases. In particular, the p53 tumour- suppressor pathway is involved in crucial aspects of tumour immunology and in homeostatic regulation of immune responses. Other studies have identified roles for p53 in various cellular processes, including metabolism and stem cell maintenance. Here, we discuss the emerging roles of p53 and other tumour-suppressor genes in tumour immunology, as well as in additional immunological settings, such as virus infection. This relatively unexplored area could yield important insights into the homeostatic control of immune cells in health and disease and facilitate the development of more effective immunotherapies. Consequently, tumour-suppressor genes are emerging as potential guardians of immune integrity.",
"title": ""
},
{
"docid": "0f9e15b890aa9c1e7cf7276fb54f83f3",
"text": "While image inpainting has recently become widely available in image manipulation tools, existing approaches to video inpainting typically do not even achieve interactive frame rates yet as they are highly computationally expensive. Further, they either apply severe restrictions on the movement of the camera or do not provide a high-quality coherent video stream. In this paper we will present our approach to high-quality real-time capable image and video inpainting. Our PixMix approach even allows for the manipulation of live video streams, providing the basis for real Diminished Reality (DR) applications. We will show how our approach generates coherent video streams dealing with quite heterogeneous background environments and non-trivial camera movements, even applying constraints in real-time.",
"title": ""
},
{
"docid": "b0d959bdb58fbcc5e324a854e9e07b81",
"text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.",
"title": ""
},
{
"docid": "d0ddc8f2efbdd7d7b6ffda32e2726d87",
"text": "Violence in video games has come under increasing research attention over the past decade. Researchers in this area have suggested that violent video games may cause aggressive behavior among players. However, the state of the extant literature has not yet been examined for publication bias. The current meta-analysis is designed to correct for this oversight. Results indicated that publication bias does exist for experimental studies of aggressive behavior, as well as for non-experimental studies of aggressive behavior and aggressive thoughts. Research in other areas, including prosocial behavior and experimental studies of aggressive thoughts were less susceptible to publication bias. Moderator effects results also suggested that studies employing less standardized and reliable measures of aggression tended to produce larger effect sizes. Suggestions for future violent video game studies are provided. © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1e30732092d2bcdeff624364c27e4c9c",
"text": "Beliefs that individuals hold about whether emotions are malleable or fixed, also referred to as emotion malleability beliefs, may play a crucial role in individuals' emotional experiences and their engagement in changing their emotions. The current review integrates affective science and clinical science perspectives to provide a comprehensive review of how emotion malleability beliefs relate to emotionality, emotion regulation, and specific clinical disorders and treatment. Specifically, we discuss how holding more malleable views of emotion could be associated with more active emotion regulation efforts, greater motivation to engage in active regulatory efforts, more effort expended regulating emotions, and lower levels of pathological distress. In addition, we explain how extending emotion malleability beliefs into the clinical domain can complement and extend current conceptualizations of major depressive disorder, social anxiety disorder, and generalized anxiety disorder. This may prove important given the increasingly central role emotion dysregulation has been given in conceptualization and intervention for these psychiatric conditions. Additionally, discussion focuses on how emotion beliefs could be more explicitly addressed in existing cognitive therapies. Promising future directions for research are identified throughout the review.",
"title": ""
},
{
"docid": "4af5b29ebda47240d51cd5e7765d990f",
"text": "In this paper, a Rectangular Waveguide (RW) to microstrip transition with Low-Temperature Co-fired Ceramic (LTCC) technology in Ka-band is designed, fabricated and measured. Compared to the traditional transition using a rectangular slot, the proposed Stepped-Impedance Resonator (SIR) slot enlarges the bandwidth of the transition. By introducing an additional design parameter, it generates multi-modes within the transition. To further improve the bandwidth and to adjust the performance of the transition, a resonant strip is embedded between the open microstrip line and its ground plane. Measured results agree well with that of the simulation, showing an effective bandwidth about 22% (from 28.5 GHz to 36.5GHz), an insertion loss approximately 3 dB and return loss better than 15 dB in the pass-band.",
"title": ""
},
{
"docid": "d87730770e080ee926a4859e421d4309",
"text": "The term metastasis is widely used to describe the endpoint of the process by which tumour cells spread from the primary location to an anatomically distant site. Achieving successful dissemination is dependent not only on the molecular alterations of the cancer cells themselves, but also on the microenvironment through which they encounter. Here, we reviewed the molecular alterations of metastatic gastric cancer (GC) as it reflects a large proportion of GC patients currently seen in clinic. We hope that further exploration and understanding of the multistep metastatic cascade will yield novel therapeutic targets that will lead to better patient outcomes.",
"title": ""
},
{
"docid": "f40ef2fed11fd2e84c3039134a196c79",
"text": "Nowadays, dye-sensitized solar cells (DSSCs) are the most extensively investigated systems for the conversion of solar energy into electricity, particularly for implementation in devices where low cost and good performance are required. Nevertheless, a key aspect is still to be addressed, being considered strongly harmful for a long time, which is the presence of water in the cell, either in the electrolyte or at the electrode/electrolyte interface. Here comes the present review, in the course of which we try our best to address the highly topical role of water in DSSCs, trying to figure out if it is a poisoner or the keyword to success, by means of a thoroughly detailed analysis of all the established phenomena in an aqueous environment. Actually, in the last few years the scientific community has suddenly turned its efforts in the direction of using water as a solvent, as demonstrated by the amount of research articles being published in the literature. Indeed, by means of DSSCs fabricated with water-based electrolytes, reduced costs, non-flammability, reduced volatility and improved environmental compatibility could be easily achieved. As a result, an increasing number of novel electrodes, dyes and electrolyte components are continuously proposed, being highly challenging from the materials science viewpoint and with the golden thread of producing truly water-based DSSCs. If the initial purpose of DSSCs was the construction of an artificial photosynthetic system able to convert solar light into electricity, the use of water as the key component may represent a great step forward towards their widespread diffusion in the market.",
"title": ""
},
{
"docid": "99f7aa4a6e3111d18ccbb527d2a9f312",
"text": "This study investigates the development of trust in a Web-based vendor during two stages of a consumer’s Web experience: exploration and commitment. Through an experimental design, the study tests the effects of third party endorsements, reputation, and individual differences on trust in the vendor during these two stages.",
"title": ""
},
{
"docid": "afc12fcceaf1bc1de724ba6e7935c086",
"text": "OLAP tools have been extensively used by enterprises to make better and faster decisions. Nevertheless, they require users to specify group-by attributes and know precisely what they are looking for. This paper takes the first attempt towards automatically extracting top-k insights from multi-dimensional data. This is useful not only for non-expert users, but also reduces the manual effort of data analysts. In particular, we propose the concept of insight which captures interesting observation derived from aggregation results in multiple steps (e.g., rank by a dimension, compute the percentage of measure by a dimension). An example insight is: ``Brand B's rank (across brands) falls along the year, in terms of the increase in sales''. Our problem is to compute the top-k insights by a score function. It poses challenges on (i) the effectiveness of the result and (ii) the efficiency of computation. We propose a meaningful scoring function for insights to address (i). Then, we contribute a computation framework for top-k insights, together with a suite of optimization techniques (i.e., pruning, ordering, specialized cube, and computation sharing) to address (ii). Our experimental study on both real data and synthetic data verifies the effectiveness and efficiency of our proposed solution.",
"title": ""
},
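One of the insight types mentioned above — tracking how a brand's rank (by year-over-year increase in sales) evolves across years — can be sketched with a few lines of pandas. The data, column names, and the simple trend score below are illustrative assumptions standing in for the paper's significance-based scoring function.

import pandas as pd

# Illustrative data: yearly sales per brand.
df = pd.DataFrame({
    "brand": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year":  [2014, 2015, 2016] * 3,
    "sales": [10, 14, 19, 12, 18, 19, 8, 11, 15],
})

# Step 1: derived measure — year-over-year increase in sales, per brand.
pivot = df.pivot(index="year", columns="brand", values="sales")
increase = pivot.diff().dropna()

# Step 2: rank brands by that increase within each year (1 = largest increase).
ranks = increase.rank(axis=1, ascending=False)

# Step 3: a toy trend score per brand: how strongly its rank falls over the years
# (positive = the rank number grows, i.e. the brand is falling in the ranking).
trend_score = ranks.diff().dropna().mean()
print(trend_score.sort_values(ascending=False))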
{
"docid": "48e925aba276c0f32aca04bcd21123c1",
"text": "The introduction of the processor instructions AES-NI and VPCLMULQDQ, that are designed for speeding up encryption, and their continual performance improvements through processor generations, has significantly reduced the costs of encryption overheads. More and more applications and platforms encrypt all of their data and traffic. As an example, we note the world wide proliferation of the use of AES-GCM, with performance dropping down to 0.64 cycles per byte (from ∼ 23 before the instructions), on the latest Intel processors. This is close to the theoretically achievable performance with the existing hardware support. Anticipating future applications and increasing demand for high performance encryption, Intel has recently announced [1] that its future architecture (codename ”Ice Lake”) will introduce new encryption instructions. These will be able to vectorize the AES-NI and VPCLMULQDQ instructions, on wide registers that are available on the AVX512 architectures. In this paper, we explain how these new instructions can be used effectively, and how properly using them can lead to the anticipated theoretical encryption throughput of around 0.16 cycles per byte. The included examples demonstrate AES encryption in various modes of operation, AEAD such as AES-GCM, and the emerging nonce misuse resistant variant AES-GCM-SIV.",
"title": ""
},
{
"docid": "413d6b01d62148fa86627f7cede5c53a",
"text": "Each day, anti-virus companies receive tens of thousands samples of potentially harmful executables. Many of the malicious samples are variations of previously encountered malware, created by their authors to evade pattern-based detection. Dealing with these large amounts of data requires robust, automatic detection approaches. This paper studies malware classification based on call graph clustering. By representing malware samples as call graphs, it is possible to abstract certain variations away, enabling the detection of structural similarities between samples. The ability to cluster similar samples together will make more generic detection techniques possible, thereby targeting the commonalities of the samples within a cluster. To compare call graphs mutually, we compute pairwise graph similarity scores via graph matchings which approximately minimize the graph edit distance. Next, to facilitate the discovery of similar malware samples, we employ several clustering algorithms, including k-medoids and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Clustering experiments are conducted on a collection of real malware samples, and the results are evaluated against manual classifications provided by human malware analysts. Experiments show that it is indeed possible to accurately detect malware families via call graph clustering. We anticipate that in the future, call graphs can be used to analyse the emergence of new malware families, and ultimately to automate implementation of generic detection schemes.",
"title": ""
},
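The clustering step described above can be reproduced in miniature once pairwise graph similarity scores are available. The sketch below assumes a small symmetric matrix of (approximate) graph edit distances between malware call graphs has already been computed by the graph-matching stage, and feeds it to DBSCAN with a precomputed metric; the numbers and parameters are invented for illustration.

import numpy as np
from sklearn.cluster import DBSCAN

# Assumed output of the graph-matching stage: pairwise approximate graph edit
# distances between five malware call graphs (symmetric, zero diagonal).
dist = np.array([
    [0.00, 0.10, 0.20, 0.90, 0.80],
    [0.10, 0.00, 0.15, 0.85, 0.90],
    [0.20, 0.15, 0.00, 0.95, 0.90],
    [0.90, 0.85, 0.95, 0.00, 0.10],
    [0.80, 0.90, 0.90, 0.10, 0.00],
])

# DBSCAN over the precomputed distances groups similar call graphs into families;
# label -1 would mark samples that belong to no family (possible new variants).
labels = DBSCAN(eps=0.25, min_samples=2, metric="precomputed").fit_predict(dist)
print(labels)  # -> [0 0 0 1 1]: two malware families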
{
"docid": "073ec1e3b8c6feab18f2ae53eab5cc24",
"text": "Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model.",
"title": ""
},
{
"docid": "96d5a007d971c12903abaff2fc739bf2",
"text": "This paper reports on two studies which investigated the relationship between children’s texting behaviour, their knowledge of text abbreviations and their school attainment in written language skills. In Study One, 11–12-year-old children provided information on their texting behaviour. They were also asked to translate a standard English sentence into a text message and vice versa. The children’s standardised verbal and nonverbal reasoning scores were also obtained. Children who used their mobiles to send three or more text messages a day had significantly lower scores than children who sent none. However, the children who, when asked to write a text message, showed greater use of text abbreviations (‘textisms’) tended to have better performance on a measure of verbal reasoning ability, which is highly associated with Key Stage 2 (KS2) and 3 English scores. In Study Two, children’s performance on writing measures was examined more specifically. Ten to eleven-year-old children were asked to complete another English to text message translation exercise. Spelling proficiency was also assessed, and KS2 Writing scores were obtained. Positive correlations between spelling ability and performance on the translation exercise were found, and group-based comparisons based on the children’s writing scores also showed that good writing attainment was associated with greater use of textisms, although the direction of this association is nor clear. Overall, these findings suggest that children’s knowledge of textisms is not associated with poor written language outcomes for children in this age range.",
"title": ""
},
{
"docid": "c02a1c89692d88671f4be454345f3fa3",
"text": "In this study, the resonant analysis and modeling of the microstrip-fed stepped-impedance (SI) slot antenna are presented by utilizing the transmission-line and lumped-element circuit topologies. This study analyzes the SI-slot antenna and systematically summarizes its frequency response characteristics, such as the resonance condition, spurious response, and equivalent circuit. Design formulas with respect to the impedance ratio of the SI slot antenna were analytically derived. The antenna designers can predict the resonant modes of the SI slot antenna without utilizing expensive EM-simulation software.",
"title": ""
},
{
"docid": "2a2497839dafe8c2d2ea2b8404f7444b",
"text": "Face analysis in images in the wild still pose a challenge for automatic age and gender recognition tasks, mainly due to their high variability in resolution, deformation, and occlusion. Although the performance has highly increased thanks to Convolutional Neural Networks (CNNs), it is still far from optimal when compared to other image recognition tasks, mainly because of the high sensitiveness of CNNs to facial variations. In this paper, inspired by biology and the recent success of attention mechanisms on visual question answering and fine-grained recognition, we propose a novel feedforward attention mechanism that is able to discover the most informative and reliable parts of a given face for improving age and gender classification. In particular, given a downsampled facial image, the proposed model is trained based on a novel end-to-end learning framework to extract the most discriminative patches from the original high-resolution image. Experimental validation on the standard Adience, Images of Groups, and MORPH II benchmarks show Preprint submitted to Pattern Recognition June 30, 2017",
"title": ""
}
] |
scidocsrr
|
2afdb32d840d6ead15b6906504d8716a
|
Factors affecting pass-along email intentions (PAEIs): Integrating the social capital and social cognition theories
|
[
{
"docid": "0681860d1be33f7d50c19398ca786582",
"text": "Online social networks are increasingly being recognized as an important source of information influencing the adoption and use of products and services. Viral marketing—the tactic of creating a process where interested people can market to each other—is therefore emerging as an important means to spread-the-word and stimulate the trial, adoption, and use of products and services. Consider the case of Hotmail, one of the earliest firms to tap the potential of viral marketing. Based predominantly on publicity from word-of-mouse [4], the Web-based email service provider garnered one million registered subscribers in its first six months, hit two million subscribers two months later, and passed the eleven million mark in eighteen months [7]. Wired magazine put this growth in perspective in its December 1998 issue: “The Hotmail user base grew faster than [that of ] any media company in history—faster than CNN, faster than AOL, even faster than Seinfeld’s audience. By mid-2000, Hotmail had over 66 million users with 270,000 new accounts being established each day.” While the potential of viral marketing to efficiently reach out to a broad set of potential users is attracting considerable attention, the value of this approach is also being questioned [5]. There needs to be a greater understanding of the contexts in which this strategy works and the characteristics of products and services for which it is most effective. This is particularly important because the inappropriate use of viral marketing can be counterproductive by creating unfavorable attitudes towards products. Work examining this phenomenon currently provides either descriptive accounts of particular initiatives [8] or advice based on anecdotal evidence [2]. What is missing is an analysis of viral marketing that highlights systematic patterns in the nature of knowledge-sharing and persuasion by influencers and responses by recipients in online social networks. To this end, we propose an organizing framework for viral marketing that draws on prior theory and highlights different behavioral mechanisms underlying knowledge-sharing, influence, and compliance in online social networks. Though the framework is descrip-",
"title": ""
},
{
"docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb",
"text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.",
"title": ""
},
{
"docid": "c57cbe432fdab3f415d2c923bea905ff",
"text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.",
"title": ""
}
] |
[
{
"docid": "d86608a8d36c575ab617e9d53403bbbc",
"text": "Performing a complex sequential finger movement requires the temporally well-ordered organization of individual finger movements. Previous behavioural studies have suggested that the brain prepares a whole sequence of movements as a single set, rather than the movements of individual fingers. However, direct neuroimaging support for this hypothesis is lacking and, assuming it to be true, it remains unclear which brain regions represent the information of a prepared sequence. Here, we measured brain activity with functional magnetic resonance imaging while 14 right-handed healthy participants performed two types of well-learned sequential finger movements with their right hands. Using multi-voxel pattern analysis, we examined whether the types of the forthcoming sequence could be predicted from the preparatory activities of nine regions of interest, which included the motor, somatosensory and posterior parietal regions in each hemisphere, bilateral visual cortices, cerebellum and basal ganglia. We found that, during preparation, the activity of the contralateral motor regions could predict which of the two sequences would be executed. Further detailed analysis revealed that the contralateral dorsal premotor cortex and supplementary motor area were the key areas that contributed to the prediction consistently across participants. These contrasted with results from execution-related brain activity where a performed sequence was successfully predicted from the activities in the broad cortical sensory-motor network, including the bilateral motor, parietal and ipsilateral somatosensory cortices. Our study supports the hypothesis that temporary well-organized sequences of movements are represented as a set in the brain, and that preparatory activity in higher-order motor regions represents information about upcoming motor actions.",
"title": ""
},
{
"docid": "90033efd960bf121e7041c9b3cd91cbd",
"text": "In this paper, we propose a novel framework for integrating geometrical measurements of monocular visual simultaneous localization and mapping (SLAM) and depth prediction using a convolutional neural network (CNN). In our framework, SLAM-measured sparse features and CNN-predicted dense depth maps are fused to obtain a more accurate dense 3D reconstruction including scale. We continuously update an initial 3D mesh by integrating accurately tracked sparse features points. Compared to prior work on integrating SLAM and CNN estimates [26], there are two main differences: Using a 3D mesh representation allows as-rigid-as-possible update transformations. We further propose a system architecture suitable for mobile devices, where feature tracking and CNN-based depth prediction modules are separated, and only the former is run on the device. We evaluate the framework by comparing the 3D reconstruction result with 3D measurements obtained using an RGBD sensor, showing a reduction in the mean residual error of 38% compared to CNN-based depth map prediction alone.",
"title": ""
},
{
"docid": "9debe1fbdb49f4224e57ebb0635e2f56",
"text": "INTRODUCTION\nRadial forearm free flap (RFFF) tube-in-tube phalloplasty is the most performed phalloplasty technique worldwide. The conspicuous donor-site scar is a drawback for some transgender men. In search for techniques with less conspicuous donor-sites, we performed a series of one-stage pedicled anterolateral thigh flap (ALT) phalloplasties combined with RFFF urethral reconstruction. In this study, we aim to describe this technique and assess its surgical outcome in a series of transgender men.\n\n\nPATIENTS AND METHODS\nBetween January 2008 and December 2015, nineteen transgender men (median age 37, range 21-57) underwent pedicled ALT phalloplasty combined with RFFF urethral reconstruction in one stage. The surgical procedure was described. Patient demographics, surgical characteristics, intra- and postoperative complications, hospitalization length, and reoperations were recorded.\n\n\nRESULTS\nThe size of the ALT flaps ranged from 12 × 12 to 15 × 13 cm, the size of the RFFFs from 14 × 3 to 17 × 3 cm. Median clinical follow-up was 35 months (range 3-95). Total RFFF failure occurred in two patients, total ALT flap failure in one patient, and partial necrosis of the ALT flap in one patient. Long-term urinary complications occurred in 10 (53%) patients, of which 9 concerned urethral strictures.\n\n\nCONCLUSIONS\nIn experienced hands, one-stage pedicled ALT phalloplasty combined with RFFF urethral reconstruction is a feasible alternative surgical option in eligible transgender men, who desire a less conspicuous forearm scar. Possible drawbacks comprise flap-related complications, difficult inner flap monitoring and urethral complications.",
"title": ""
},
{
"docid": "0a30e4de94a63b9866183ade4204ecd0",
"text": "Pharyngodon medinae García-Calvente, 1948 (Nematoda: Pharyngodonidae) is redescribed from Podarcis pityusensis (Bosca, 1883) (Sauria: Lacertidae) of the Balearic Islands (Spain) and confirmed as a member of the genus Skrjabinodon Inglis, 1968. A systematic review of S. medinae and closely related species is also given. Parathelandros canariensis is referred to Skrjabinodon as a new combination and Parathelandros Magzoub et al., 1980 is dismissed as a junior homonym of Parathelandros Baylis, 1930.",
"title": ""
},
{
"docid": "57622d5e0ff2cee7a07988cf972012cd",
"text": "This paper presents a novel multi-stage stochastic distributed generation investment planning model for making investment decisions under uncertainty. The problem, formulated from a coordinated system planning viewpoint, simultaneously minimizes the net present value of costs rated to losses, emission, operation, and maintenance, as well as the cost of unserved energy. The formulation is anchored on a two-period planning horizon, each having multiple stages. The first period is a short-term horizon in which robust decisions are pursued in the face of uncertainty; whereas, the second one spans over a medium to long-term horizon involving exploratory and/or flexible investment decisions. The operational variability and uncertainty introduced by intermittent generation sources, electricity demand, emission prices, demand growth, and others are accounted for via probabilistic and stochastic methods, respectively. Metrics such as cost of ignoring uncertainty and value of perfect information are used to clearly demonstrate the benefits of the proposed stochastic model. A real-life distribution network system is used as a case study and the results show the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "5b96fcbe3ac61265ef5407f4e248193e",
"text": "Modelling the similarity of sentence pairs is an important problem in natural language processing and information retrieval, with applications in tasks such as paraphrase identification and answer selection in question answering. The Multi-Perspective Convolutional Neural Network (MP-CNN) is a model that improved previous state-of-the-art models in 2015 and has remained a popular model for sentence similarity tasks. However, until now, there has not been a rigorous study of how the model actually achieves competitive accuracy. In this thesis, we report on a series of detailed experiments that break down the contribution of each component of MP-CNN towards its statistical accuracy and how they affect model robustness. We find that two key components of MP-CNN are non-essential to achieve competitive accuracy and they make the model less robust to changes in hyperparameters. Furthermore, we suggest simple changes to the architecture and experimentally show that we improve the accuracy of MP-CNN when we remove these two major components of MP-CNN and incorporate these small changes, pushing its scores closer to more recent works on competitive semantic textual similarity and answer selection datasets, while using eight times fewer parameters.",
"title": ""
},
{
"docid": "bf31bf712d978d16f2b4d2768f8e7354",
"text": "Design/methodology/approach: Both qualitative comparisons of functionality and quantitative comparisons of false positives and false negatives are made for seven different scanners. The quantitative assessment includes data from both authenticated and unauthenticated scans. Experiments were conducted on a computer network of 28 hosts with various operating systems, services and vulnerabilities. This network was set up by a team of security researchers and professionals.",
"title": ""
},
{
"docid": "0dffca7979e72f7bb4b0fd94b031a46f",
"text": "In collaborative filtering approaches, recommendations are inferred from user data. A large volume and a high data quality is essential for an accurate and precise recommender system. As consequence, companies are collecting large amounts of personal user data. Such data is often highly sensitive and ignoring users’ privacy concerns is no option. Companies address these concerns with several risk reduction strategies, but none of them is able to guarantee cryptographic secureness. To close that gap, the present paper proposes a novel recommender system using the advantages of blockchain-supported secure multiparty computation. A potential customer is able to allow a company to apply a recommendation algorithm without disclosing her personal data. Expected benefits are a reduction of fraud and misuse and a higher willingness to share personal data. An outlined experiment will compare users’ privacy-related behavior in the proposed recommender system with existent solutions.",
"title": ""
},
{
"docid": "0c7afb3bee6dd12e4a69632fbdb50ce8",
"text": "OBJECTIVES\nTo systematically review levels of metabolic expenditure and changes in activity patterns associated with active video game (AVG) play in children and to provide directions for future research efforts.\n\n\nDATA SOURCES\nA review of the English-language literature (January 1, 1998, to January 1, 2010) via ISI Web of Knowledge, PubMed, and Scholars Portal using the following keywords: video game, exergame, physical activity, fitness, exercise, energy metabolism, energy expenditure, heart rate, disability, injury, musculoskeletal, enjoyment, adherence, and motivation.\n\n\nSTUDY SELECTION\nOnly studies involving youth (< or = 21 years) and reporting measures of energy expenditure, activity patterns, physiological risks and benefits, and enjoyment and motivation associated with mainstream AVGs were included. Eighteen studies met the inclusion criteria. Articles were reviewed and data were extracted and synthesized by 2 independent reviewers. MAIN OUTCOME EXPOSURES: Energy expenditure during AVG play compared with rest (12 studies) and activity associated with AVG exposure (6 studies).\n\n\nMAIN OUTCOME MEASURES\nPercentage increase in energy expenditure and heart rate (from rest).\n\n\nRESULTS\nActivity levels during AVG play were highly variable, with mean (SD) percentage increases of 222% (100%) in energy expenditure and 64% (20%) in heart rate. Energy expenditure was significantly lower for games played primarily through upper body movements compared with those that engaged the lower body (difference, -148%; 95% confidence interval, -231% to -66%; P = .001).\n\n\nCONCLUSIONS\nThe AVGs enable light to moderate physical activity. Limited evidence is available to draw conclusions on the long-term efficacy of AVGs for physical activity promotion.",
"title": ""
},
{
"docid": "41b16e29baef6f27a03c774657811d5e",
"text": "Pharmacokinetics is a fundamental scientific discipline that underpins applied therapeutics. Patients need to be prescribed appropriate medicines for a clinical condition. The medicine is chosen on the basis of an evidencebased approach to clinical practice and assured to be compatible with any other medicines or alternative therapies the patient may be taking. The design of a dosage regimen is dependent on a basic understanding of the drug use process (DUP). When faced with a patient who shows specific clinical signs and symptoms, pharmacists must always ask a fundamental question: ‘Is this patient suffering from a drug-related problem?’ Once this issue is evaluated and a clinical diagnosis is available, the pharmacist can apply the DUP to ensure that the patient is prescribed an appropriate medication regimen, that the patient understands the therapy prescribed, and that an agreed concordance plan is achieved. Pharmacists using the DUP consider:",
"title": ""
},
{
"docid": "0a4392285df7ddb92458ffa390f36867",
"text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"title": ""
},
{
"docid": "a5ca29b15b805a21b6fdcadeab970d42",
"text": "Local online reviews such as Yelp have become large repositories of information, thus making it difficult for readers to find the most useful content. Our work investigates the factors that influence the readers' judgment of usefulness of restaurant reviews. We focus on assessing the mechanism behind the users' assessment of usefulness of reviews, particularly with respect to reviews provided by reviewers with local knowledge. We collected 160 manual annotations of 36 unique restaurant reviews and we interviewed ten participants. Our results show that users are able to detect reviews written by knowledgeable locals, and they perceive reviews provided by locals more useful not because they provide more valuable content but because local knowledge results in higher trust. We discuss design implications of these findings for helping readers to overcome information overload in local systems.",
"title": ""
},
{
"docid": "b10b42c8fbe13ad8d1d04aec9df12a00",
"text": "As an alternative strategy to antibiotic use in aquatic disease management, probiotics have recently attracted extensive attention in aquaculture. However, the use of terrestrial bacterial species as probiotics for aquaculture has had limited success, as bacterial strain characteristics are dependent upon the environment in which they thrive. Therefore, isolating potential probiotic bacteria from the marine environment in which they grow optimally is a better approach. Bacteria that have been used successfully as probiotics belong to the genus Vibrio and Bacillus, and the species Thalassobacter utilis. Most researchers have isolated these probiotic strains from shrimp culture water, or from the intestine of different penaeid species. The use of probiotic bacteria, based on the principle of competitive exclusion, and the use of immunostimulants are two of the most promising preventive methods developed in the fight against diseases during the last few years. It also noticed that probiotic bacteria could produce some digestive enzymes, which might improve the digestion of shrimp, thus enhancing the ability of stress resistance and health of the shrimp. However, the probiotics in aquatic environment remain to be a controversial concept, as there was no authentic evidence / real environment demonstrations on the successful use of probiotics and their mechanisms of action in vivo. The present review highlights the potential sources of probiotics, mechanism of action, diversity of probiotic microbes and challenges of probiotic usage in shrimp aquaculture.",
"title": ""
},
{
"docid": "42c297b74abd95bbe70bb00ddb0aa925",
"text": "IMPASS (Intelligent Mobility Platform with Active Spoke System) is a novel locomotion system concept that utilizes rimless wheels with individually actuated spokes to provide the ability to step over large obstacles like legs, adapt to uneven surfaces like tracks, yet retaining the speed and simplicity of wheels. Since it lacks the complexity of legs and has a large effective (wheel) diameter, this highly adaptive system can move over extreme terrain with ease while maintaining respectable travel speeds. This paper presents the concept, preliminary kinematic analyses and design of an IMPASS based robot with two actuated spoke wheels and an articulated tail. The actuated spoke wheel concept allows multiple modes of motion, which give it the ability to assume a stable stance using three contact points per wheel, walk with static stability with two contact points per wheel, or stride quickly using one contact point per wheel. Straight-line motion and considerations for turning are discussed for the oneand two-point contact schemes followed by the preliminary design and recommendations for future study. Index Terms – IMPASS, rimless wheel, actuated spoke wheel, mobility, locomotion.",
"title": ""
},
{
"docid": "f0532446a19fb2fa28a7a01cddca7e37",
"text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.",
"title": ""
},
{
"docid": "4307dd62177d67881a51efaccd29957d",
"text": "Data mining techniques and information personalization have made significant growth in the past decade. Enormous volume of data is generated every day. Recommender systems can help users to find their specific information in the extensive volume of information. Several techniques have been presented for development of Recommender System (RS). One of these techniques is the Evolutionary Computing (EC), which can optimize and improve RS in the various applications. This study investigates the number of publications, focusing on some aspects such as the recommendation techniques, the evaluation methods and the datasets which are used.",
"title": ""
},
{
"docid": "7dfc8df07a14ff115860a8340ed77d33",
"text": "Distributed Denial of Service (DDoS) attacks based on Network Time Protocol (NTP) amplification, which became prominent in December 2013, have received significant global attention. We chronicle how this attack rapidly rose from obscurity to become the dominant large DDoS vector. Via the lens of five distinct datasets, we characterize the advent and evolution of these attacks. Through a dataset that measures a large fraction of global Internet traffic, we show a three order of magnitude rise in NTP. Using a large darknet, we observe a similar rise in global scanning activity, both malicious and research. We then dissect an active probing dataset, which reveals that the pool of amplifiers totaled 2.2M unique IPs and includes a small number of \"mega amplifiers,\" servers that replied to a single tiny probe packet with gigabytes of data. This dataset also allows us, for the first time, to analyze global DDoS attack victims (including ports attacked) and incidents, where we show 437K unique IPs targeted with at least 3 trillion packets, totaling more than a petabyte. Finally, ISP datasets shed light on the local impact of these attacks. In aggregate, we show the magnitude of this major Internet threat, the community's response, and the effect of that response.",
"title": ""
},
{
"docid": "227a6e820b101073d5621b2f399883a5",
"text": "Studying the quality requirements (aka Non-Functional Requirements (NFR)) of a system is crucial in Requirements Engineering. Many software projects fail because of neglecting or failing to incorporate the NFR during the software life development cycle. This paper focuses on analyzing the importance of the quality requirements attributes in software effort estimation models based on the Desharnais dataset. The Desharnais dataset is a collection of eighty one software projects of twelve attributes developed by a Canadian software house. The analysis includes studying the influence of each of the quality requirements attributes, as well as the influence of all quality requirements attributes combined when calculating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation include the Mean of the Magnitude of Relative Error (MMRE), the Prediction Level (PRED), Root Mean Squared Error (RMSE), Mean Error and the Coefficient of determination (R). Results show that the quality attribute “Language” is the most statistically significant when calculating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and software effort is predicted based on software size only, the value of the error (MMRE) is doubled. KeywordsNon-Functional Requirements, Quality Attributes, Software Effort Estimation, Desharnais Dataset",
"title": ""
},
{
"docid": "87e8b5b75b5e83ebc52579e8bbae04f0",
"text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.",
"title": ""
}
] |
scidocsrr
|
db486cba278133fee4c8c7195b318e8e
|
Efficient Data Structures For Tamper-Evident Logging
|
[
{
"docid": "c6bc52a8fc4e9e99d1c3165934b82352",
"text": "Audit logs are an important part of any secure system, and they need to be carefully designed in order to give a faithful representation of past system activity. This is especially true in the presence of adversaries who might want to tamper with the audit logs. While it is important that auditors can inspect audit logs to assess past system activity, the content of an audit log may contain sensitive information, and should therefore be protected from unauthorized",
"title": ""
}
] |
[
{
"docid": "7008e040a548d1f5e3d2365a1c712907",
"text": "The k-NN graph has played a central role in increasingly popular data-driven techniques for various learning and vision tasks; yet, finding an efficient and effective way to construct k-NN graphs remains a challenge, especially for large-scale high-dimensional data. In this paper, we propose a new approach to construct approximate k-NN graphs with emphasis in: efficiency and accuracy. We hierarchically and randomly divide the data points into subsets and build an exact neighborhood graph over each subset, achieving a base approximate neighborhood graph; we then repeat this process for several times to generate multiple neighborhood graphs, which are combined to yield a more accurate approximate neighborhood graph. Furthermore, we propose a neighborhood propagation scheme to further enhance the accuracy. We show both theoretical and empirical accuracy and efficiency of our approach to k-NN graph construction and demonstrate significant speed-up in dealing with large scale visual data.",
"title": ""
},
{
"docid": "fe6630363491af99b78c232087edceb1",
"text": "We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the posterior distribution of the Q-values induced by any policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for -greedy improves DQN performance on 51 out of 57 games in the Atari suite.",
"title": ""
},
{
"docid": "1f3a3c1d3c452c8d4b69270481b74c56",
"text": "A smart city is a growing phenomenon of the last years with a lot of researches as well as implementation activities. A smart city is an interdisciplinary field that requires a high level of cooperation among experts from different fields and a contribution of the latest technologies in order to achieve the best results in six key areas. The six key areas cover economy, environment, mobility, people, living and governance. Following a system development methodology is in general a necessity for a successful implementation of a system or a project. Smart city projects introduce additionally new challenges. There is a need for cooperation across many fields, from technical or economic through legislation to humanitarian, together with sharing of resources. The traditional Systems Engineering methodologies fail with respect to such challenges. This paper provides an overview of the existing Systems Engineering methodologies and their limitations. A new Hybrid-Agile approach is proposed and its advantages with respect to smart city projects are discussed. However, the approach expects changes in our thinking. Customers (typically municipality or governmental organizations) have to become active and engaged in smart city projects. It is demonstrated that a city cannot be smart without smart government.",
"title": ""
},
{
"docid": "116b5f129e780a99a1d78ec02a1fb092",
"text": "We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting Cast, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that Cast family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.",
"title": ""
},
{
"docid": "34e544af5158850b7119ac4f7c0b7b5e",
"text": "Over the last decade, the surprising fact has emerged that machines can possess therapeutic power. Due to the many healing qualities of touch, one route to such power is through haptic emotional interaction, which requires sophisticated touch sensing and interpretation. We explore the development of touch recognition technologies in the context of a furry artificial lap-pet, with the ultimate goal of creating therapeutic interactions by sensing human emotion through touch. In this work, we build upon a previous design for a new type of fur-based touch sensor. Here, we integrate our fur sensor with a piezoresistive fabric location/pressure sensor, and adapt the combined design to cover a curved creature-like object. We then use this interface to collect synchronized time-series data from the two sensors, and perform machine learning analysis to recognize 9 key affective touch gestures. In a study of 16 participants, our model averages 94% recognition accuracy when trained on individuals, and 86% when applied to the combined set of all participants. The model can also recognize which participant is touching the prototype with 79% accuracy. These results promise a new generation of emotionally intelligent machines, enabled by affective touch gesture recognition.",
"title": ""
},
{
"docid": "dd144f12a70a37160007f2b7f04b4d77",
"text": "This research examines the role of trait empathy in emotional contagion through non-social targets-art objects. Studies 1a and 1b showed that high- (compared to low-) empathy individuals are more likely to infer an artist's emotions based on the emotional valence of the artwork and, as a result, are more likely to experience the respective emotions themselves. Studies 2a and 2b experimentally manipulated artists' emotions via revealing details about their personal life. Study 3 experimentally induced positive vs. negative emotions in individuals who then wrote literary texts. These texts were shown to another sample of participants. High- (compared to low-) empathy participants were more like to accurately identify and take on the emotions ostensibly (Studies 2a and 2b) or actually (Study 3) experienced by the \"artists\". High-empathy individuals' enhanced sensitivity to others' emotions is not restricted to social targets, such as faces, but extends to products of the human mind, such as objects of art.",
"title": ""
},
{
"docid": "775fe381aa59d3491ff50f593be5fafa",
"text": "This chapter elaborates on augmented reality marketing (ARM) as a digital marketing campaign and a strategic trend in tourism and hospitality. The computer assisted augmenting of perception by means of additional interactive information levels in real time is known as augmented reality. Augmented reality marketing is a constructed worldview on a device with blend of reality and added or augmented themes interacting with five sense organs and experiences. The systems and approaches of marketing are integrating with technological applications in almost all sectors of economies and in all phases of a business’s value delivery network. Trends in service sector marketing provide opportunities in generating technology led tourism marketing campaigns. Also, the adoption, relevance and significance of technology in tourism and hospitality value delivery network can hardly be ignored. Many factors are propelling the functionalities of diverse actors in tourism. This paper explores the use of technology at various phases of tourism and hospitality marketing, along with the role of technology in enhancing consumer experience and value addition. It further supports the view that technology is aiding in faster diffusion of tourism products, relates destinations or attractions and thus benefiting the entire society. The augmented reality in marketing can create effective and enjoyable interactive experience by engaging the customer through a rich and rewarding experience of virtually plus reality. Such a tool has real potential in marketing in tourism and hospitality sector. Thus, this study discusses the ARM as a promising trend in tourism and hospitality and how this will meet future needs of tourism and hospitality products or offerings. The Augmented Reality Marketing: A Merger of Marketing and Technology in Tourism",
"title": ""
},
{
"docid": "94a64f143c19f2815f101eb0c4dc304f",
"text": "Information technology can improve the quality, efficiency, and cost of healthcare. In this survey, we examine the privacy requirements of mobile computing technologies that have the potential to transform healthcare. Such mHealth technology enables physicians to remotely monitor patients' health and enables individuals to manage their own health more easily. Despite these advantages, privacy is essential for any personal monitoring technology. Through an extensive survey of the literature, we develop a conceptual privacy framework for mHealth, itemize the privacy properties needed in mHealth systems, and discuss the technologies that could support privacy-sensitive mHealth systems. We end with a list of open research questions.",
"title": ""
},
{
"docid": "4cf3c8da29395fb8196886dd72072150",
"text": "The force controller of series elastic actuator (SEA) is an important issue, because it determines the output performance of SEA (the bandwidth and the resonant peak). However, the existing force controllers of SEA attach little importance to the determinants of the performance. This paper develops four different kinds of force controllers and selects the optimal controller by comparing their capabilities. The comparison demonstrates that the parameters of dynamical models are key factors affecting the output performance. Finally, the experiments reveal that the equivalent mass and the damping coefficient of SEA are determinants. The conclusion can provide theoretical implications to SEA design.",
"title": ""
},
{
"docid": "eda40814ecaecbe5d15ccba49f8a0d43",
"text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem",
"title": ""
},
{
"docid": "9c510d7ddeb964c5d762d63d9e284f44",
"text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "349d1dcd3e8d415c7fc6009586c9f62d",
"text": "The branch-line coupler may be redesigned for crossover application. The bandwidth of such a coupler can be extended by suitably incorporating additional sections into the composite design. Laboratory tests on microstrip prototypes have shown the return loss and isolation of the three- and four-section couplers to be better than 20 dB over bandwidths of 22% and 33%, respectively. The insertion losses and group delays vary by less than ±0.05 dB and ±1 ns, respectively, for both prototypes.",
"title": ""
},
{
"docid": "ab3fb8980fa8d88e348f431da3d21ed4",
"text": "PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in plant genomes. Recently, we updated PIECE to a new version, PIECE 2.0 (http://probes.pw.usda.gov/piece or http://aegilops.wheat.ucdavis.edu/piece). PIECE 2.0 contains annotated genes from 49 sequenced plant species as compared to 25 species in the previous version. In the current version, we also added several new features: (i) a new viewer was developed to show phylogenetic trees displayed along with the structure of individual genes; (ii) genes in the phylogenetic tree can now be also grouped according to KOG (The annotation of Eukaryotic Orthologous Groups) and KO (KEGG Orthology) in addition to Pfam domains; (iii) information on intronless genes are now included in the database; (iv) a statistical summary of global gene structure information for each species and its comparison with other species was added; and (v) an improved GSDraw tool was implemented in the web server to enhance the analysis and display of gene structure. The updated PIECE 2.0 database will be a valuable resource for the plant research community for the study of gene structure and evolution.",
"title": ""
},
{
"docid": "d318fa3ea2d612db3ba3b7dd56e40906",
"text": "Super-resolution microscopy (SRM) is becoming increasingly important to study nanoscale biological structures. Two most widely used devices for SRM are super-resolution fluorescence microscopy (SRFM) and electron microscopy (EM). For biological living samples, however, SRFM is not preferred since it requires exogenous agents and EM is not preferred since vacuum is required for sample preparation. To overcome these limitations of EM and SFRM, we present a simulation study of super-resolution photoacoustic microscopy (SR-PAM). To break the diffraction limit of light, we investigated a sub-10 nm near-field localization by focusing femtosecond laser pulses under the plasmonic nanoaperture. Using this near-field localization as a light source, we numerically studied the feasibility of the SR-PAM with a k-wave simulation toolbox in MATLAB. In this photoacoustic simulation, we successfully confirmed that the SR-PAM could be a potential method to resolve and image nanoscale structures.",
"title": ""
},
{
"docid": "d60e344c8bfb4422c947ddf22e9837b5",
"text": "INTRODUCTION\nPrevious studies evaluated the perception of laypersons to symmetric alteration of anterior dental esthetics. However, no studies have evaluated the perception of asymmetric esthetic alterations. This investigation will determine whether asymmetric and symmetric anterior dental discrepancies are detectable by dental professionals and laypersons.\n\n\nMETHODS\nSeven images of women's smiles were intentionally altered with a software-imaging program. The alterations involved crown length, crown width, midline diastema, papilla height, and gingiva-to-lip relationship of the maxillary anterior teeth. These altered images were rated by groups of general dentists, orthodontists, and laypersons using a visual analog scale. Statistical analysis of the responses resulted in the establishment of threshold levels of attractiveness for each group.\n\n\nRESULTS\nOrthodontists were more critical than dentists and laypeople when evaluating asymmetric crown length discrepancies. All 3 groups could identify a unilateral crown width discrepancy of 2.0 mm. A small midline diastema was not rated as unattractive by any group. Unilateral reduction of papillary height was generally rated less attractive than bilateral alteration. Orthodontists and laypeople rated a 3-mm distance from gingiva to lip as unattractive.\n\n\nCONCLUSIONS\nAsymmetric alterations make teeth more unattractive to not only dental professionals but also the lay public.",
"title": ""
},
{
"docid": "26029eb824fc5ad409f53b15bfa0dc15",
"text": "Detecting contradicting statements is a fundamental and challenging natural language processing and machine learning task, with numerous applications in information extraction and retrieval. For instance, contradictions need to be recognized by question answering systems or multi-document summarization systems. In terms of machine learning, it requires the ability, through supervised learning, to accurately estimate and capture the subtle differences between contradictions and for instance, paraphrases. In terms of natural language processing, it demands a pipeline approach with distinct phases in order to extract as much knowledge as possible from sentences. Previous state-of-the-art systems rely often on semantics and alignment relations. In this work, I move away from the commonly setup used in this domain, and address the problem of detecting contradictions as a classification task. I argue that for such classification, one can heavily rely on features based on those used for detecting paraphrases and recognizing textual entailment, alongside with numeric and string based features. This M.Sc. dissertation provides a system capable of detecting contradictions from a pair of affirmations published across newspapers with both a F1-score and Accuracy of 71%. Furthermore, this M.Sc. dissertation provides an assessment of what are the most informative features for detecting contradictions and paraphrases and infer if exists a correlation between contradiction detection and paraphrase identification.",
"title": ""
},
{
"docid": "b347cea48fea5341737e315535ea57e5",
"text": "1 EXTENDED ABSTRACT Real world interactions are full of coordination problems [2, 3, 8, 14, 15] and thus constructing agents that can solve them is an important problem for artificial intelligence research. One of the simplest, most heavily studied coordination problems is the matrixform, two-player Stag Hunt. In the Stag Hunt, each player makes a choice between a risky action (hunt the stag) and a safe action (forage for mushrooms). Foraging for mushrooms always yields a safe payoff while hunting yields a high payoff if the other player also hunts but a very low payoff if one shows up to hunt alone. This game has two important Nash equilibria: either both players show up to hunt (this is called the payoff dominant equilibrium) or both players stay home and forage (this is called the risk-dominant equilibrium [7]). In the Stag Hunt, when the payoff to hunting alone is sufficiently low, dyads of learners as well as evolving populations converge to the risk-dominant (safe) equilibrium [6, 8, 10, 11]. The intuition here is that even a slight amount of doubt about whether one’s partner will show up causes an agent to choose the safe action. This in turn causes partners to be less likely to hunt in the future and the system trends to the inefficient equilibrium. We are interested in the problem of agent design: our task is to construct an agent that will go into an initially poorly understood environment and make decisions. Our agent must learn from its experiences to update its policy and maximize some scalar reward. However, there will also be other agents which we do not control. These agents will also learn from their experiences. We ask: if the environment has Stag Hunt-like properties, can we make changes to our agent’s learning to improve its outcomes? We focus on reinforcement learning (RL), however, many of our results should generalize to other learning algorithms.",
"title": ""
},
{
"docid": "9ecd46e90ccd1db7daef14dd63fea8ee",
"text": "HISTORY AND EXAMINATION — A 13-year-old Caucasian boy (BMI 26.4 kg/m) presented with 3 weeks’ history of polyuria, polydipsia, and weight loss. His serum glucose (26.8 mmol/l), HbA1c (9.4%, normal 3.2–5.5) and fructosamine (628 mol/l, normal 205–285) levels were highly elevated (Fig. 1), and urinalysis showed glucosuria ( ) and ketonuria ( ) . He was HLA-DRB1* 0101,*0901, DRB4*01, DQA1*0101,03, and DQB1*0303,0501. Plasma Cpeptide, determined at a blood glucose of 17.0 mmol/l, was low (0.18 nmol/l). His previous history was unremarkable, and he did not take any medication. The patient received standard treatment with insulin, fluid, and electrolyte replacement and diabetes education. After an uneventful clinical course he was discharged on multiple-injection insulin therapy (total 0.9 units kg 1 day ) after 10 days. Subsequently, insulin doses were gradually reduced to 0.3 units kg 1 day , and insulin treatment was completely stopped after 11 months. Without further treatment, HbA1c and fasting glucose levels remained normal throughout the entire follow-up of currently 4.5 years. During oral glucose tolerance testing performed 48 months after diagnosis, he had normal fasting and 2-h levels of glucose (3.7 and 5.6 mmol/l, respectively), insulin (60.5 and 217.9 pmol/l, respectively), and C-peptide (0.36 and 0.99 nmol/l, respectively). His insulin sensitivity, as determined by insulin sensitivity index (composite) and homeostasis model assessment, was normal, and BMI remained unchanged. Serum autoantibodies to GAD65, insulin autoantibody-2, insulin, and islet cell antibodies were initially positive but showed a progressive decline or loss during follow-up. INVESTIGATION — T-cell antigen recognition and cytokine profiles were studied using a library of 21 preproinsulin (PPI) peptides (2). In the patient’s peripheral blood mononuclear cells (PBMCs), a high cumulative interleukin (IL)-10) secretion (201 pg/ml) was observed in response to PPI peptides, with predominant recognition of PPI44–60 and PPI49–65, while interferon (IFN)secretion was undetectable. In contrast, in PBMCs from a cohort of 12 type 1 diabetic patients without long-term remission (2), there was a dominant IFNresponse but low IL-10 secretion to PPI. Analysis of CD4 T–helper cell subsets revealed that IL-10 secretion was mostly attributable to the patient’s naı̈ve/recently activated CD45RA cells, while a strong IFNresponse was observed in CD45RA cells. CD45RA T-cells have been associated with regulatory T-cell function in diabetes, potentially capable of suppressing",
"title": ""
},
{
"docid": "9dadd96558791417495a5e1afa031851",
"text": "INTRODUCTION\nLittle information is available on malnutrition-related factors among school-aged children ≥5 years in Ethiopia. This study describes the prevalence of stunting and thinness and their related factors in Libo Kemkem and Fogera, Amhara Regional State and assesses differences between urban and rural areas.\n\n\nMETHODS\nIn this cross-sectional study, anthropometrics and individual and household characteristics data were collected from 886 children. Height-for-age z-score for stunting and body-mass-index-for-age z-score for thinness were computed. Dietary data were collected through a 24-hour recall. Bivariate and backward stepwise multivariable statistical methods were employed to assess malnutrition-associated factors in rural and urban communities.\n\n\nRESULTS\nThe prevalence of stunting among school-aged children was 42.7% in rural areas and 29.2% in urban areas, while the corresponding figures for thinness were 21.6% and 20.8%. Age differences were significant in both strata. In the rural setting, fever in the previous 2 weeks (OR: 1.62; 95% CI: 1.23-2.32), consumption of food from animal sources (OR: 0.51; 95% CI: 0.29-0.91) and consumption of the family's own cattle products (OR: 0.50; 95% CI: 0.27-0.93), among others factors were significantly associated with stunting, while in the urban setting, only age (OR: 4.62; 95% CI: 2.09-10.21) and years of schooling of the person in charge of food preparation were significant (OR: 0.88; 95% CI: 0.79-0.97). Thinness was statistically associated with number of children living in the house (OR: 1.28; 95% CI: 1.03-1.60) and family rice cultivation (OR: 0.64; 95% CI: 0.41-0.99) in the rural setting, and with consumption of food from animal sources (OR: 0.26; 95% CI: 0.10-0.67) and literacy of head of household (OR: 0.24; 95% CI: 0.09-0.65) in the urban setting.\n\n\nCONCLUSION\nThe prevalence of stunting was significantly higher in rural areas, whereas no significant differences were observed for thinness. Various factors were associated with one or both types of malnutrition, and varied by type of setting. To effectively tackle malnutrition, nutritional programs should be oriented to local needs.",
"title": ""
},
{
"docid": "eb92ec25cebfc22748b395f0b474333a",
"text": "Differential Evolution (DE) is a simple and efficient optimizer, especially for continuous optimization. For these reasons DE has often been employed for solving various engineering problems. On the other hand, the DE structure has some limitations in the search logic, since it contains too narrow a set of exploration moves. This fact has inspired many computer scientists to improve upon DE by proposing modifications to the original algorithm. This paper presents a survey on DE and its recent advances. A classification, into two macro-groups, of the DE modifications is proposed here: (1) algorithms which integrate additional components within the DE structure, (2) algorithms which employ a modified DE structure. For each macro-group, four algorithms representative of the state-of-the-art in DE, have been selected for an in depth description of their working principles. In order to compare their performance, these eight algorithm have been tested on a set of benchmark problems. Experiments have been repeated for a (relatively) low dimensional case and a (relatively) high dimensional case. The working principles, differences and similarities of these recently proposed DE-based algorithms have also been highlighted throughout the paper. Although within both macro-groups, it is unclear whether there is a superiority of one algorithm with respect to the others, some conclusions can be drawn. At first, in order to improve upon the DE performance a modification which includes some additional and alternative search moves integrating those contained in a standard DE is necessary. These extra moves should assist the DE framework in detecting new promising search directions to be used by DE. Thus, a limited employment of these alternative moves appears to be the best option in successfully assisting DE. The successful extra moves are obtained in two ways: an increase in the exploitative pressure and the introduction of some randomization. This randomization should not be excessive though, since it would jeopardize the search. A proper increase in the randomization is crucial for obtaining significant improvements in the DE functioning. Numerical results show that, among the algorithms considered in this study, the most efficient additional components in a DE framework appear to be the population size reduction and the scale factor local search. Regarding the modified DE structures, the global and local neighborhood search and self-adaptive control parameter scheme, recently proposed in literature, seem to be the most promising modifications.",
"title": ""
}
] |
scidocsrr
|
08bbea096c19a441f37a03e57ccb612b
|
An alternative view of the mental lexicon
|
[
{
"docid": "144a16894de8fa88cf3130fe3e44c05d",
"text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this foundations of language brain meaning grammar evolution, how can you bargain with the thing that has many benefits for you?",
"title": ""
},
{
"docid": "8e8d7b2411fa0b0c19d745ce85fcec11",
"text": "Parallel distributed processing (PDP) architectures demonstrate a potentially radical alternative to the traditional theories of language processing that are based on serial computational models. However, learning complex structural relationships in temporal data presents a serious challenge to PDP systems. For example, automata theory dictates that processing strings from a context-free language (CFL) requires a stack or counter memory device. While some PDP models have been hand-crafted to emulate such a device, it is not clear how a neural network might develop such a device when learning a CFL. This research employs standard backpropagation training techniques for a recurrent neural network (RNN) in the task of learning to predict the next character in a simple deterministic CFL (DCFL). We show that an RNN can learn to recognize the structure of a simple DCFL. We use dynamical systems theory to identify how network states re ̄ ect that structure by building counters in phase space. The work is an empirical investigation which is complementary to theoretical analyses of network capabilities, yet original in its speci ® c con® guration of dynamics involved. The application of dynamical systems theory helps us relate the simulation results to theoretical results, and the learning task enables us to highlight some issues for understanding dynamical systems that process language with counters.",
"title": ""
},
{
"docid": "664cd5bc10a5564b3962458a21292c46",
"text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. In this paper, we review basic aspects of conventional approaches to sentence comprehension and point out some of the difficulties faced by models that take these approaches. We then describe an alternative approach, based on the principles of parallel distributed processing, and show how it offers different answers to basic questions about the nature of the language processing mechanism. We describe an illustrative simulation model that captures the key characteristics of the approach, and illustrate how it can cope with the difficulties faced by conventional models. We describe alternative ways of conceptualising basic aspects of language processing within the framework of this approach, consider how it can address several arguments that might be brought to bear against it, and suggest avenues for future development.",
"title": ""
}
] |
[
{
"docid": "37f2ed531daf16b41eb99f21cc065dbe",
"text": "This paper combines three exploratory data analysis methods, principal component methods, hierarchical clustering and partitioning, to enrich the description of the data. Principal component methods are used as preprocessing step for the clustering in order to denoise the data, transform categorical data in continuous ones or balanced groups of variables. The principal component representation is also used to visualize the hierarchical tree and/or the partition in a 3D-map which allows to better understand the data. The proposed methodology is available in the HCPC (Hierarchical Clustering on Principal Components) function of the FactoMineR package.",
"title": ""
},
{
"docid": "ab2f3971a6b4f2f22e911b2367d1ef9e",
"text": "Many multimedia systems stream real-time visual data continuously for a wide variety of applications. These systems can produce vast amounts of data, but few studies take advantage of the versatile and real-time data. This paper presents a novel model based on the Convolutional Neural Networks (CNNs) to handle such imbalanced and heterogeneous data and successfully identifies the semantic concepts in these multimedia systems. The proposed model can discover the semantic concepts from the data with a skewed distribution using a dynamic sampling technique. The paper also presents a system that can retrieve real-time visual data from heterogeneous cameras, and the run-time environment allows the analysis programs to process the data from thousands of cameras simultaneously. The evaluation results in comparison with several state-of-the-art methods demonstrate the ability and effectiveness of the proposed model on visual data captured by public network cameras.",
"title": ""
},
{
"docid": "b57006686160241bf118c2c638971764",
"text": "Reproducibility is the hallmark of good science. Maintaining a high degree of transparency in scientific reporting is essential not just for gaining trust and credibility within the scientific community but also for facilitating the development of new ideas. Sharing data and computer code associated with publications is becoming increasingly common, motivated partly in response to data deposition requirements from journals and mandates from funders. Despite this increase in transparency, it is still difficult to reproduce or build upon the findings of most scientific publications without access to a more complete workflow. Version control systems (VCS), which have long been used to maintain code repositories in the software industry, are now finding new applications in science. One such open source VCS, Git, provides a lightweight yet robust framework that is ideal for managing the full suite of research outputs such as datasets, statistical code, figures, lab notes, and manuscripts. For individual researchers, Git provides a powerful way to track and compare versions, retrace errors, explore new approaches in a structured manner, while maintaining a full audit trail. For larger collaborative efforts, Git and Git hosting services make it possible for everyone to work asynchronously and merge their contributions at any time, all the while maintaining a complete authorship trail. In this paper I provide an overview of Git along with use-cases that highlight how this tool can be leveraged to make science more reproducible and transparent, foster new collaborations, and support novel uses.",
"title": ""
},
{
"docid": "a13114518e3e2303e15bf079508d26aa",
"text": "Machine learning algorithms are optimized to model statistical properties of the training data. If the input data reflects stereotypes and biases of the broader society, then the output of the learning algorithm also captures these stereotypes. In this paper, we initiate the study of gender stereotypes in word embedding, a popular framework to represent text data. As their use becomes increasingly common, applications can inadvertently amplify unwanted stereotypes. We show across multiple datasets that the embeddings contain significant gender stereotypes, especially with regard to professions. We created a novel gender analogy task and combined it with crowdsourcing to systematically quantify the gender bias in a given embedding. We developed an efficient algorithm that reduces gender stereotype using just a handful of training examples while preserving the useful geometric properties of the embedding. We evaluated our algorithm on several metrics. While we focus on male/female stereotypes, our framework may be applicable to other types of embedding biases.",
"title": ""
},
{
"docid": "eb2a89c9308283f871df3d52d1bdc340",
"text": "Vertical NAND flash memory cell array by TCAT (terabit cell array transistor) technology is proposed. Damascened metal gate SONOS type cell in the vertical NAND flash string is realized by a unique dasiagate replacementpsila process. Also, conventional bulk erase operation of the cell is successfully demonstrated. All advantages of TCAT flash is achieved without any sacrifice of bit cost scalability.",
"title": ""
},
{
"docid": "f555a50f629bd9868e1be92ebdcbc154",
"text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. Then, we outline current state of the research and future perspectives.With this article, readers can have a more thorough understanding of smart grid security and the research trends in this topic.",
"title": ""
},
{
"docid": "383b029f9c10186a163f48c01e1ef857",
"text": "Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.",
"title": ""
},
{
"docid": "7c5d9777de76a895c628e0dc171781da",
"text": "The influence of temporal and spatial variations on the microbial community composition was assessed in the unique coastal mangrove of Sundarbans using parallel 16S rRNA gene pyrosequencing. The total sediment DNA was extracted and subjected to the 16S rRNA gene pyrosequencing, which resulted in 117 Mbp of data from three experimental stations. The taxonomic analysis of the pyrosequencing data was grouped into 24 different phyla. In general, Proteobacteria were the most dominant phyla with predominance of Deltaproteobacteria, Alphaproteobacteria, and Gammaproteobacteria within the sediments. Besides Proteobacteria, there are a number of sequences affiliated to the following major phyla detected in all three stations in both the sampling seasons: Actinobacteria, Bacteroidetes, Planctomycetes, Acidobacteria, Chloroflexi, Cyanobacteria, Nitrospira, and Firmicutes. Further taxonomic analysis revealed abundance of micro-aerophilic and anaerobic microbial population in the surface layers, suggesting anaerobic nature of the sediments in Sundarbans. The results of this study add valuable information about the composition of microbial communities in Sundarbans mangrove and shed light on possible transformations promoted by bacterial communities in the sediments.",
"title": ""
},
{
"docid": "bfc663107f88522f438bd173db2b85ce",
"text": "While much progress has been made in how to encode a text sequence into a sequence of vectors, less attention has been paid to how to aggregate these preceding vectors (outputs of RNN/CNN) into fixed-size encoding vector. Usually, a simple max or average pooling is used, which is a bottom-up and passive way of aggregation and lack of guidance by task information. In this paper, we propose an aggregation mechanism to obtain a fixed-size encoding with a dynamic routing policy. The dynamic routing policy is dynamically deciding that what and how much information need be transferred from each word to the final encoding of the text sequence. Following the work of Capsule Network, we design two dynamic routing policies to aggregate the outputs of RNN/CNN encoding layer into a final encoding vector. Compared to the other aggregation methods, dynamic routing can refine the messages according to the state of final encoding vector. Experimental results on five text classification tasks show that our method outperforms other aggregating models by a significant margin. Related source code is released on our github page1.",
"title": ""
},
{
"docid": "ebb01a778c668ef7b439875eaa5682ac",
"text": "In this paper, we present a large scale off-line handwritten Chinese character database-HCL2000 which will be made public available for the research community. The database contains 3,755 frequently used simplified Chinesecharacters written by 1,000 different subjects. The writers’ information is incorporated in the database to facilitate testing on grouping writers with different background such as age, occupation, gender, and education etc. We investigate some characteristics of writing styles from different groups of writers. We evaluate HCL2000 database using three different algorithms as a baseline. We decide to publish the database along with this paper and make it free for a research purpose.",
"title": ""
},
{
"docid": "0d41fcc5ea57e42c87b4a3152d50f9d2",
"text": "This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be “embedded” into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem.",
"title": ""
},
{
"docid": "3f45dbb4b3bc0de55a5274170c800406",
"text": "An optimal design procedure, based on Hooke-Jeeves method, applied to a permanent magnet transverse flux motor (PMTFM) is presented in the paper. Different objective functions, as minimum cost, minimum cogging torques and maximum efficiency were considered. The results obtained for a low power sample PMTFM are given and commented.",
"title": ""
},
{
"docid": "880b4ce4c8fd19191cb996aceabdf5a7",
"text": "The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200 million pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale.",
"title": ""
},
{
"docid": "d813c010b5c70b11912ada93f0e3b742",
"text": "The rapid development of technologies introduces smartness to all organisations and communities. The Smart Tourism Destinations (STD) concept emerges from the development of Smart Cities. With technology being embedded on all organisations and entities, destinations will exploit synergies between ubiquitous sensing technology and their social components to support the enrichment of tourist experiences. By applying smartness concept to address travellers’ needs before, during and after their trip, destinations could increase their competitiveness level. This paper aims to take advantage from the development of Smart Cities by conceptualising framework for Smart Tourism Destinations through exploring tourism applications in destination and addressing both opportunities and challenges it possessed.",
"title": ""
},
{
"docid": "f1dbacae0f2b67555616bfc551e5a6ea",
"text": "The oscillating and swinging parts of a target observed by radar cause additional frequency modulation and induce sidebands in the target's Doppler frequency shift (micro-Doppler). This effect provides unique features for classification in radar systems. In this paper, the micro-Doppler spectra and range-Doppler matrices of single bird and bird flocks are obtained by simulations for linear FMCW radar. Obtained range-Doppler matrices are compared for single bird and bird flock under several scenarios and new features are proposed for classification.",
"title": ""
},
{
"docid": "2fcd7e151c658e29cacda5c4f5542142",
"text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animals models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition are increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmation in subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.",
"title": ""
},
{
"docid": "f72d72975b1c16ee3d0c0ec1826301e3",
"text": "Motion layer estimation has recently emerged as a promising object tracking method. In this paper, we extend previous research on layer-based tracker by introducing the concept of background occluding layers and explicitly inferring depth ordering of foreground layers. The background occluding layers lie in front of, behind, and in between foreground layers. Each pixel in the background regions belongs to one of these layers and occludes all the foreground layers behind it. Together with the foreground ordering, the complete information necessary for reliably tracking objects through occlusion is included in our representation. An MAP estimation framework is developed to simultaneously update the motion layer parameters, the ordering parameters, and the background occluding layers. Experimental results show that under various conditions with occlusion, including situations with moving objects undergoing complex motions or having complex interactions, our tracking algorithm is able to handle many difficult tracking tasks reliably.",
"title": ""
},
{
"docid": "2a18be6c7ba24fa727406ddd75f80c0e",
"text": "Increasing evidence suggests that Alzheimer's disease pathogenesis is not restricted to the neuronal compartment, but includes strong interactions with immunological mechanisms in the brain. Misfolded and aggregated proteins bind to pattern recognition receptors on microglia and astroglia, and trigger an innate immune response characterised by release of inflammatory mediators, which contribute to disease progression and severity. Genome-wide analysis suggests that several genes that increase the risk for sporadic Alzheimer's disease encode factors that regulate glial clearance of misfolded proteins and the inflammatory reaction. External factors, including systemic inflammation and obesity, are likely to interfere with immunological processes of the brain and further promote disease progression. Modulation of risk factors and targeting of these immune mechanisms could lead to future therapeutic or preventive strategies for Alzheimer's disease.",
"title": ""
},
{
"docid": "9e68f50309814e976abff3f5a5926a57",
"text": "This paper presents a compact and broadband micro strip patch antenna array (BMPAA) with uniform linear array configuration of 4x1 elemnt for 3G applications. The 4×1 BMPAA was designed and developed by integrating new patch sha pe Hybrid E-H shape, L-probe feeding, inverted patch structure with air filled dielectric microstr ip patch antenna (MPA) element. The array was constructed using two dielectric layer arrangement with a thick air-filled substrate sandwiched betwee n a top-loaded dielectric substrate ( RT 5880) with inverting radiating patch and a ground plane . The Lprobe fed inverted hybrid E-H (LIEH) shaped BMPAA a chieves an impedance bandwidth of 17.32% referenced to the center frequency at 2.02 GHz (at VSWR ≤ 1.5), maximum achievable gain of 11.9±1dBi, and 23 dB crosspolarization level.",
"title": ""
},
{
"docid": "0d3403ce2d1613c1ea6b938b3ba9c5e6",
"text": "Extracting a set of generalizable rules that govern the dynamics of complex, high-level interactions between humans based only on observations is a high-level cognitive ability. Mastery of this skill marks a significant milestone in the human developmental process. A key challenge in designing such an ability in autonomous robots is discovering the relationships among discriminatory features. Identifying features in natural scenes that are representative of a particular event or interaction (i.e. »discriminatory features») and then discovering the relationships (e.g., temporal/spatial/spatio-temporal/causal) among those features in the form of generalized rules are non-trivial problems. They often appear as a »chicken-and-egg» dilemma. This paper proposes an end-to-end learning framework to tackle these two problems in the context of learning generalized, high-level rules of human interactions from structured demonstrations. We employed our proposed deep reinforcement learning framework to learn a set of rules that govern a behavioral intervention session between two agents based on observations of several instances of the session. We also tested the accuracy of our framework with human subjects in diverse situations.",
"title": ""
}
] |
scidocsrr
|
9a264d4cde6a0b1fd0df5a4dbab5c50f
|
Adaptive relevance feedback in information retrieval
|
[
{
"docid": "a2fd33f276a336e2a33d84c2a0abc283",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
},
{
"docid": "5fd55cd22aa9fd4df56b212d3d578134",
"text": "Relevance feedback has a history in information retrieval that dates back well over thirty years (c.f. [SL96]). Relevance feedback is typically used for query expansion during short-term modeling of a user’s immediate information need and for user profiling during long-term modeling of a user’s persistent interests and preferences. Traditional relevance feedback methods require that users explicitly give feedback by, for example, specifying keywords, selecting and marking documents, or answering questions about their interests. Such relevance feedback methods force users to engage in additional activities beyond their normal searching behavior. Since the cost to the user is high and the benefits are not always apparent, it can be difficult to collect the necessary data and the effectiveness of explicit techniques can be limited.",
"title": ""
}
] |
[
{
"docid": "b17889bc5f4d4fb498a9b9c5d45bd560",
"text": "Photonic components are superior to electronic ones in terms of operational bandwidth, but the diffraction limit of light poses a significant challenge to the miniaturization and high-density integration of optical circuits. The main approach to circumvent this problem is to exploit the hybrid nature of surface plasmon polaritons (SPPs), which are light waves coupled to free electron oscillations in a metal that can be laterally confined below the diffraction limit using subwavelength metal structures. However, the simultaneous realization of strong confinement and a propagation loss sufficiently low for practical applications has long been out of reach. Channel SPP modes—channel plasmon polaritons (CPPs)—are electromagnetic waves that are bound to and propagate along the bottom of V-shaped grooves milled in a metal film. They are expected to exhibit useful subwavelength confinement, relatively low propagation loss, single-mode operation and efficient transmission around sharp bends. Our previous experiments showed that CPPs do exist and that they propagate over tens of micrometres along straight subwavelength grooves. Here we report the design, fabrication and characterization of CPP-based subwavelength waveguide components operating at telecom wavelengths: Y-splitters, Mach–Zehnder interferometers and waveguide–ring resonators. We demonstrate that CPP guides can indeed be used for large-angle bending and splitting of radiation, thereby enabling the realization of ultracompact plasmonic components and paving the way for a new class of integrated optical circuits.",
"title": ""
},
{
"docid": "c49ffcb45cc0a7377d9cbdcf6dc07057",
"text": "Dermoscopy is an in vivo method for the early diagnosis of malignant melanoma and the differential diagnosis of pigmented lesions of the skin. It has been shown to increase diagnostic accuracy over clinical visual inspection in the hands of experienced physicians. This article is a review of the principles of dermoscopy as well as recent technological developments.",
"title": ""
},
{
"docid": "49218bcad26390909d0309bc7e04c780",
"text": "Credit card fraud costs consumers and the financial industry billions of dollars annually. However, there is a dearth of published literature on credit card fraud detection. In this study we employed transaction aggregation strategy to detect credit card fraud. We aggregated transactions to capture consumer buying behavior prior to each transaction and used these aggregations for model estimation to identify fraudulent transactions. We use real-life data of credit card transactions from an international credit card operation for transaction aggregation and model estimation. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "47f90d0ea5d60f5cb2d6409e7d497f68",
"text": "Five new chlorophenolic glucosides, curculigine E (1), curculigine F (2), curculigine G (3), curculigine H (5), curculigine I (6) and one new phenolic glycoside, orcinoside H (4), together with eight known phenolic glycosides (7-14) were isolated from the Curculigo orchioides Gaertn. Their structures were established by spectroscopic techniques (IR, UV, MS, 1D and 2D NMR). The isolated phenolic glycosides were evaluated for antiosteoporotic activity against MC3T3-E1 cell line using MTT assays. Compounds 1, 2, 3, and 5 showed moderate antiosteoporotic activity with the proliferation rate of 10.1-14.1%.",
"title": ""
},
{
"docid": "0f082b08e6d47e00d9bfab1c28b062e9",
"text": "In this work, we present Interaction+, a tool that enhances the interactive capability of existing web-based visualizations. Different from the toolkits for authoring interactions during the visualization construction, Interaction+ takes existing visualizations as input, analyzes the visual objects, and provides users with a suite of interactions to facilitate the visual exploration, including selection, aggregation, arrangement, comparison, filtering, and annotation. Without accessing the underlying data or process how the visualization is constructed, Interaction+ is application-independent and can be employed in various visualizations on the web. We demonstrate its usage in two scenarios and evaluate its effectiveness with a qualitative user study.",
"title": ""
},
{
"docid": "e442b7944062f6201e779aa1e7d6c247",
"text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.",
"title": ""
},
{
"docid": "712a4bdb5b285f3ef52218096ec3a4bf",
"text": "We describe the relations between active maintenance of the hand at various positions in a two-dimensional space and the frequency of single cell discharge in motor cortex (n = 185) and area 5 (n = 128) of the rhesus monkey. The steady-state discharge rate of 124/185 (67%) motor cortical and 105/128 (82%) area 5 cells varied with the position in which the hand was held in space (“static spatial effect”). The higher prevalence of this effect in area 5 was statistically significant. In both structures, static effects were observed at similar frequencies for cells that possessed as well as for those that lacked passive driving from the limb. The results obtained by a quantitative analysis were similar for neurons of the two cortical areas studied. It was found that of the neurons with a static effect, the steady-state discharge rate of 78/124 (63%) motor cortical and 63/105 (60%) area 5 cells was a linear function of the position of the hand across the two-dimensional space, so that the neuronal “response surface” was adequately described by a plane (R2 ≥ 0.7, p < 0.05, F-test in analysis of variance). The preferred orientations of these response planes differed for different cells. These results indicate that individual cells in these areas do not relate uniquely a particular position of the hand in space. Instead, they seem to encode spatial gradients at certain orientations. A unique relation to position in space could be signalled by the whole population of these neurons, considered as an ensemble. This remains to be elucidated. Finally, the similarity of the quantitative relations observed in motor cortex and area 5 suggests that these structures may process spatial information in a similar way.",
"title": ""
},
{
"docid": "4d1d343f03f6a1fae94f630a64e10081",
"text": "This paper describes our system participating in the aspect-based sentiment analysis task of Semeval 2014. The goal was to identify the aspects of given target entities and the sentiment expressed towards each aspect. We firstly introduce a system based on supervised machine learning, which is strictly constrained and uses the training data as the only source of information. This system is then extended by unsupervised methods for latent semantics discovery (LDA and semantic spaces) as well as the approach based on sentiment vocabularies. The evaluation was done on two domains, restaurants and laptops. We show that our approach leads to very promising results.",
"title": ""
},
{
"docid": "44b7ed6c8297b6f269c8b872b0fd6266",
"text": "vii",
"title": ""
},
{
"docid": "51cbdd7c0e54949e2a3d238eda721b3d",
"text": "This paper addresses the task scheduling and path planning problem for a team of cooperating vehicles performing autonomous deliveries in urban environments. The cooperating team comprises two vehicles with complementary capabilities, a truck restricted to travel along a street network, and a quadrotor micro-aerial vehicle of capacity one that can be deployed from the truck to perform deliveries. The problem is formulated as an optimal path planning problem on a graph and the goal is to find the shortest cooperative route enabling the quadrotor to deliver items at all requested locations. The problem is shown to be NP-hard. A solution is then proposed using a novel reduction to the Generalized Traveling Salesman Problem, for which well-established heuristic solvers exist. The heterogeneous delivery problem contains as a special case the problem of scheduling deliveries from multiple static warehouses. We propose two additional algorithms, based on enumeration and a reduction to the traveling salesman problem, for this special case. Simulation results compare the performance of the presented algorithms and demonstrate examples of delivery route computations over real urban street maps.",
"title": ""
},
{
"docid": "f9a3f69cf26b279fa8600fd2ebbc3426",
"text": "We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: \"Are there any apples in the fridge?\" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question. Popular reinforcement learning approaches with a single controller perform poorly on IQA owing to the large and diverse state space. We propose the Hierarchical Interactive Memory Network (HIMN), consisting of a factorized set of controllers, allowing the system to operate at multiple levels of temporal abstraction. To evaluate HIMN, we introduce IQUAD V1, a new dataset built upon AI2-THOR [35], a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration. Our experiments show that our proposed model outperforms popular single controller based methods on IQUAD V1. For sample questions and results, please view our video: https://youtu.be/pXd3C-1jr98.",
"title": ""
},
{
"docid": "c15121597ef5f07e389b2c79c7abfb87",
"text": "Recently, there has been a growing interest in biologically inspired biped locomotion control with Central Pattern Generator (CPG). However, few experimental attempts on real hardware 3D humanoid robots have yet been made. Our goal in this paper is to present our achievement of 3D biped locomotion using a neural oscillator applied to a humanoid robot, QRIO. We employ reduced number of neural oscillators as the CPG model, along with a task space Cartesian coordinate system and utilizing entrainment property to establish stable walking gait. We verify robustness against lateral perturbation, through numerical simulation of stepping motion in place along the lateral plane. We then implemented it on the QRIO. It could successfully cope with unknown 3mm bump by autonomously adjusting its stepping period. Sagittal motion produced by a neural oscillator is introduced, and then overlapped with the lateral motion generator in realizing 3D biped locomotion on a QRIO humanoid robot.",
"title": ""
},
{
"docid": "d6e5f280fc760c2791b80fecd8da2447",
"text": "The increased importance of lowering power in memory design has produced a trend of operating memories at lower supply voltages. Recent explorations into sub-threshold operation for logic show that minimum energy operation is possible in this region. These two trends suggest a meeting point for energy-constrained applications in which SRAM operates at sub-threshold voltages compatible with the logic. Since sub-threshold voltages leave less room for large static noise margin (SNM), a thorough understanding of the impact of various design decisions and other parameters becomes critical. This paper analyzes SNM for sub-threshold bitcells in a 65-nm process for its dependency on sizing, VDD, temperature, and local and global threshold variation. The VT variation has the greatest impact on SNM, so we provide a model that allows estimation of the SNM along the worst-case tail of the distribution",
"title": ""
},
{
"docid": "1f7bd85c5b28f97565d8b38781e875ab",
"text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.",
"title": ""
},
{
"docid": "1f44c8d792b961649903eb1ab2612f3c",
"text": "Teeth segmentation is an important step in human identification and Content Based Image Retrieval (CBIR) systems. This paper proposes a new approach for teeth segmentation using morphological operations and watershed algorithm. In Cone Beam Computer Tomography (CBCT) and Multi Slice Computer Tomography (MSCT) each tooth is an elliptic shape region that cannot be separated only by considering their pixels' intensity values. For segmenting a tooth from the image, some enhancement is necessary. We use morphological operators such as image filling and image opening to enhance the image. In the proposed algorithm, a Maximum Intensity Projection (MIP) mask is used to separate teeth regions from black and bony areas. Then each tooth is separated using the watershed algorithm. Anatomical constraints are used to overcome the over segmentation problem in watershed method. The results show a high accuracy for the proposed algorithm in segmenting teeth. Proposed method decreases time consuming by considering only one image of CBCT and MSCT for segmenting teeth instead of using all slices.",
"title": ""
},
{
"docid": "4fd421bbe92b40e85ffd66cf0084b1b8",
"text": "Real-time performance of adaptive digital signal processing algorithms is required in many applications but it often means a high computational load for many conventional processors. In this paper, we present a configurable hardware architecture for adaptive processing of noisy signals for target detection based on Constant False Alarm Rate (CFAR) algorithms. The architecture has been designed to deal with parallel/pipeline processing and to be configured for three version of CFAR algorithms, the Cell-Average, the Max and the Min CFAR. The proposed architecture has been implemented on a Field Programmable Gate Array (FPGA) device providing good performance improvements over software implementations. FPGA implementation results are presented and discussed.",
"title": ""
},
{
"docid": "dd2bf018a4edfebc754881cbacb6f705",
"text": "In this paper, we propose a new unsupervised spectral feature selection model by embedding a graph regularizer into the framework of joint sparse regression for preserving the local structures of data. To do this, we first extract the bases of training data by previous dictionary learning methods and, then, map original data into the basis space to generate their new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate its objective function by simultaneously taking subspace learning and joint sparse regression into account, then, design a new optimization solution to solve the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) via replacing the least square loss function with a robust loss function, for achieving the same goals and also avoiding the impact of outliers. Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed the state-of-the-art algorithms in terms of ${k}$ -nearest neighbor classification performance.",
"title": ""
},
{
"docid": "a30d9dbac3f0d988fd15884cda3ecf93",
"text": "In this review article, the authors have summarized the published literature supporting the value of video game use on the following topics: improvement of cognitive functioning in older individuals, potential reasons for the positive effects of video game use in older age, and psychological factors related to using video games in older age. It is important for geriatric researchers and practitioners to identify approaches and interventions that minimize the negative effects of the various changes that occur within the aging body. Generally speaking, biological aging results in a decline of both physical and cognitive functioning.1–3 However, a growing body of literature indicates that taking part in physically and/or mentally stimulating activities may contribute to the maintenance of cognitive abilities and even lead to acquiring cognitive gains.4 It is important to identify ways to induce cognitive improvements in older age, especially considering that the population of the United States (U.S.) is aging rapidly, with the number of people age 65 and older expected to increase to almost 84 million by 2050.5 This suggests that there will likely be a rapid escalation in the number of older individuals living with age-related cognitive impairment. It is currently estimated that there are 5.5 million people in the U.S. who have been diagnosed with Alzheimer’s disease,6 which is one of the most common forms of dementia.7 Thus, research aimed at helping older adults maintain good cognitive functioning is highly needed. Due to space limitations, this article is not meant to include all of the available research in this area; it contains mainly supporting evidence on the effects of video game use among older adults. Some opposing evidence is briefly mentioned when covering whether the skills acquired during video game training transfer to non-practiced tasks (which is a particularly controversial topic with ample mixed evidence).",
"title": ""
},
{
"docid": "f4535d47191caaa1e830e5d8fae6e1ba",
"text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards -100% sensitivity at the cost of high FP levels (-40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.",
"title": ""
},
{
"docid": "4958f4a85b531a2d5a846d1f6eb1a5a3",
"text": "The n-channel lateral double-diffused metal-oxide- semiconductor (nLDMOS) devices in high-voltage (HV) technologies are known to have poor electrostatic discharge (ESD) robustness. To improve the ESD robustness of nLDMOS, a co-design method combining a new waffle layout structure and a trigger circuit is proposed to fulfill the body current injection technique in this work. The proposed layout and circuit co-design method on HV nLDMOS has successfully been verified in a 0.5-¿m 16-V bipolar-CMOS-DMOS (BCD) process and a 0.35- ¿m 24-V BCD process without using additional process modification. Experimental results through transmission line pulse measurement and failure analyses have shown that the proposed body current injection technique can significantly improve the ESD robustness of HV nLDMOS.",
"title": ""
}
] |
scidocsrr
|
7a35d5d4b1f92272d892382be1029cd8
|
Blockchain for Configuration Management in IoT
|
[
{
"docid": "64a77ec55d5b0a729206d9af6d5c7094",
"text": "In this paper, we propose an Internet of Things (IoT) virtualization framework to support connected objects sensor event processing and reasoning by providing a semantic overlay of underlying IoT cloud. The framework uses the sensor-as-aservice notion to expose IoT cloud's connected objects functional aspects in the form of web services. The framework uses an adapter oriented approach to address the issue of connectivity with various types of sensor nodes. We employ semantic enhanced access polices to ensure that only authorized parties can access the IoT framework services, which result in enhancing overall security of the proposed framework. Furthermore, the use of event-driven service oriented architecture (e-SOA) paradigm assists the framework to leverage the monitoring process by dynamically sensing and responding to different connected objects sensor events. We present our design principles, implementations, and demonstrate the development of IoT application with reasoning capability by using a green school motorcycle (GSMC) case study. Our exploration shows that amalgamation of e-SOA, semantic web technologies and virtualization paves the way to address the connectivity, security and monitoring issues of IoT domain.",
"title": ""
},
{
"docid": "39e6ddd04b7fab23dbbeb18f2696536e",
"text": "Moving IoT components from the cloud onto edge hosts helps in reducing overall network traffic and thus minimizes latency. However, provisioning IoT services on the IoT edge devices presents new challenges regarding system design and maintenance. One possible approach is the use of software-defined IoT components in the form of virtual IoT resources. This, in turn, allows exposing the thing/device layer and the core IoT service layer as collections of micro services that can be distributed to a broad range of hosts.\n This paper presents the idea and evaluation of using virtual resources in combination with a permission-based blockchain for provisioning IoT services on edge hosts.",
"title": ""
},
{
"docid": "fccddd89aa261a9e30041bf2323d2029",
"text": "Cloud computing and Internet of Things (IoT), two very different technologies, are both already part of our life. Their massive adoption and use is expected to increase further, making them important components of the Future Internet. A novel paradigm where Cloud and IoT are merged together is foreseen as disruptive and an enabler of a large number of application scenarios. In this paper we focus our attention on the integration of Cloud and IoT, which we call the CloudIoT paradigm. Many works in literature have surveyed Cloud and IoT separately: their main properties, features, underlying technologies, and open issues. However, to the best of our knowledge, these works lack a detailed analysis of the CloudIoT paradigm. To bridge this gap, in this paper we review the literature about the integration of Cloud and IoT. We start analyzing and discussing the need for integrating them, the challenges deriving from such integration, and how these issues have been tackled in literature. We then describe application scenarios that have been presented in literature, as well as platforms -- both commercial and open source -- and projects implementing the CloudIoT paradigm. Finally, we identify open issues, main challenges and future directions in this promising field.",
"title": ""
}
] |
[
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "4b494016220eb5442642e34c3ed2d720",
"text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.",
"title": ""
},
{
"docid": "24fea8f85c2fac8bd8278a153ab64a90",
"text": "In this paper, we describe an approach for learning planning domain models directly from natural language (NL) descriptions of activity sequences. The modelling problem has been identified as a bottleneck for the widespread exploitation of various technologies in Artificial Intelligence, including automated planners. There have been great advances in modelling assisting and model generation tools, including a wide range of domain model acquisition tools. However, for modelling tools, there is the underlying assumption that the user can formulate the problem using some formal language. And even in the case of the domain model acquisition tools, there is still a requirement to specify input plans in an easily machine readable format. Providing this type of input is impractical for many potential users. This motivates us to generate planning domain models directly from NL descriptions, as this would provide an important step in extending the widespread adoption of planning techniques. We start from NL descriptions of actions and use NL analysis to construct structured representations, from which we construct formal representations of the action sequences. The generated action sequences provide the necessary structured input for inducing a PDDL domain, using domain model acquisition technology. In order to capture a concise planning model, we use an estimate of functional similarity, so sentences that describe similar behaviours are represented by the same planning operator. We validate our approach with a user study, where participants are tasked with describing the activities occurring in several videos. Then our system is used to learn planning domain models using the participants’ NL input. We demonstrate that our approach is effective at learning models on these tasks. Introduction Modelling problems appropriately for use by a computer program has been identified as a key bottleneck in the exploitation of various AI technologies. In Automated Planning, this has inspired a growing body of work that aims to support the modelling process including domain acquisition tools, which learn a formal domain model of a system from some form of input data. There is interest in applying domain model acquisition across a range of research and application areas. For example within the business process community (Hoffmann, Weber, and Kraft 2012) and Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. space applications (Frank et al. 2011). An extended version of the LOCM domain model acquisition system (Cresswell, McCluskey, and West 2009) has also been used to help in the development of a puzzle game (Ersen and Sariel 2015) based on spatio-temporal reasoning. Web Service Composition is another area in which domain model acquisition techniques have been used (Walsh and Littman 2008). These tools vary in the specifics of the input language, such as example action sequences (Cresswell, McCluskey, and West 2009; Cresswell and Gregory 2011), or action sequences and a partial domain model (McCluskey et al. 
2009; Richardson 2008); the query system by which they acquire the input data, which is typically static training sets, although there are examples working with an interactive querying system (Walsh and Littman 2008; Mehta, Tadepalli, and Fern 2011); and the target model language, including STRIPS (Cresswell, McCluskey, and West 2009; Cresswell and Gregory 2011), probabilistic (Mourão, Petrick, and Steedman 2010), and numeric (Gregory and Lindsay 2016; Hayton et al. 2016). However, in each case the user is left the responsibility of defining a formal representation for the solution. Defining these logical formalisms and applying them consistently requires time and experience in both the target domain and in the representation language, which many potential users will not have. It is therefore important to consider alternative input languages, such as Natural Language (Goldwasser and Roth 2011). Natural Language (NL) input is the most natural way for humans to interact and it is no surprise that there is much interest in using NL as input for computer systems. In day-to-day life, Siri and its competitors are controlled by simple spoken word input, but can activate complex procedures on our phones. In the RoboCup@Home competitions robots are controlled by task descriptions and are automatically translated into a series of simple actions that can be performed on the robot. And NL lessons have been used to learn partial representations of the world dynamics for game-like environments (Goldwasser and Roth 2011). A key aspect of these systems is an underlying language, which the NL input is mapped onto. For example, in the case of RoboCup@Home, an input of ‘go to the living room’ might be mapped onto quite a different representation, using the action name ‘move’ and requiring a set of parameters that break the movement into smaller",
"title": ""
},
{
"docid": "9e3d3783aa566b50a0e56c71703da32b",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "a914d26b2086e20a7452f0634574820d",
"text": "In this paper, we provide a semantic foundation for role-related concepts in enterprise modelling. We use a conceptual modelling framework to provide a well-founded underpinning for these concepts. We review a number of enterprise modelling approaches in light of the concepts described. This allows us to understand the various approaches, to contrast them and to identify problems in the definition and/or usage of these concepts.",
"title": ""
},
{
"docid": "9a13a2baf55676f82457f47d3929a4e7",
"text": "Humans are a cultural species, and the study of human psychology benefits from attention to cultural influences. Cultural psychology's contributions to psychological science can largely be divided according to the two different stages of scientific inquiry. Stage 1 research seeks cultural differences and establishes the boundaries of psychological phenomena. Stage 2 research seeks underlying mechanisms of those cultural differences. The literatures regarding these two distinct stages are reviewed, and various methods for conducting Stage 2 research are discussed. The implications of culture-blind and multicultural psychologies for society and intergroup relations are also discussed.",
"title": ""
},
{
"docid": "e1373b622129b9e9d1f6414a018481d2",
"text": "Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot’s arm.",
"title": ""
},
{
"docid": "02df2dde321bb81220abdcff59418c66",
"text": "Monitoring aquatic debris is of great interest to the ecosystems, marine life, human health, and water transport. This paper presents the design and implementation of SOAR - a vision-based surveillance robot system that integrates an off-the-shelf Android smartphone and a gliding robotic fish for debris monitoring. SOAR features real-time debris detection and coverage-based rotation scheduling algorithms. The image processing algorithms for debris detection are specifically designed to address the unique challenges in aquatic environments. The rotation scheduling algorithm provides effective coverage of sporadic debris arrivals despite camera's limited angular view. Moreover, SOAR is able to dynamically offload computation-intensive processing tasks to the cloud for battery power conservation. We have implemented a SOAR prototype and conducted extensive experimental evaluation. The results show that SOAR can accurately detect debris in the presence of various environment and system dynamics, and the rotation scheduling algorithm enables SOAR to capture debris arrivals with reduced energy consumption.",
"title": ""
},
{
"docid": "6bdfb1bb4afb4a7581ad26dd1f1e1089",
"text": "Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different than any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 microm technology. Using adequate approximation functions solved the problem of activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors were increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to the results obtained with fuzzy system approach.",
"title": ""
},
{
"docid": "52064068aed0e6fc1a12d05d61d035b4",
"text": "Wireless power transfer systems with multiple transmitters promise advantages of higher transfer efficiencies and focusing effects over single-transmitter systems. From the standard formulation, straightforward maximization of the power transfer efficiency is not trivial. By reformulating the problem, a convex optimization problem emerges, which can be solved efficiently. Further, using Lagrangian duality theory, analytical results are found for the achievable maximum power transfer efficiency and all parameters involved. With these closed-form results, planar and coaxial wireless power transfer setups are investigated.",
"title": ""
},
{
"docid": "f3a8fa7b4c6ac7a6218a0b8aa5a8f4b2",
"text": "Give us 5 minutes and we will show you the best book to read today. This is it, the uncertainty quantification theory implementation and applications that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read.",
"title": ""
},
{
"docid": "863202feb1410b177c6bb10ccc1fa43d",
"text": "Multimedia retrieval plays an indispensable role in big data utilization. Past efforts mainly focused on single-media retrieval. However, the requirements of users are highly flexible, such as retrieving the relevant audio clips with one query of image. So challenges stemming from the “media gap,” which means that representations of different media types are inconsistent, have attracted increasing attention. Cross-media retrieval is designed for the scenarios where the queries and retrieval results are of different media types. As a relatively new research topic, its concepts, methodologies, and benchmarks are still not clear in the literature. To address these issues, we review more than 100 references, give an overview including the concepts, methodologies, major challenges, and open issues, as well as build up the benchmarks, including data sets and experimental results. Researchers can directly adopt the benchmarks to promptly evaluate their proposed methods. This will help them to focus on algorithm design, rather than the time-consuming compared methods and results. It is noted that we have constructed a new data set XMedia, which is the first publicly available data set with up to five media types (text, image, video, audio, and 3-D model). We believe this overview will attract more researchers to focus on cross-media retrieval and be helpful to them.",
"title": ""
},
{
"docid": "caaca962473382e40a08f90240cc88b6",
"text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.",
"title": ""
},
{
"docid": "d214ef50a5c26fb65d8c06ea7db3d07c",
"text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.",
"title": ""
},
{
"docid": "74749bf843ce4373d63631c490e6afb0",
"text": "The selection of a software development life cycle (SDLC) model for a software project is highly dependent upon the characteristics of the software product to be developed. We classified software products according to characteristics that matter for SDLC selection. We surveyed literature to elicit recommendations for SDLC selection. We formalized our findings to present a rule based recommendation system that can be helpful to software developers in selecting the most appropriate SDLC model to be used for the development of a software product. We conducted an initial evaluation of our system. We believe our SDLC recommendation system provides useful hints for selecting an SDLC, and provides a base for validating and refining SDLC recommendation rules.",
"title": ""
},
{
"docid": "0701f4d74179857b736ebe2c7cdb78b7",
"text": "Modern computer networks generate significant volume of behavioural system logs on a daily basis. Such networks comprise many computers with Internet connectivity, and many users who access the Web and utilise Cloud services make use of numerous devices connected to the network on an ad-hoc basis. Measuring the risk of cyber attacks and identifying the most recent modus-operandi of cyber criminals on large computer networks can be difficult due to the wide range of services and applications running within the network, the multiple vulnerabilities associated with each application, the severity associated with each vulnerability, and the ever-changing attack vector of cyber criminals. In this paper we propose a framework to represent these features, enabling real-time network enumeration and traffic analysis to be carried out, in order to produce quantified measures of risk at specific points in time. We validate the approach using data from a University network, with a data collection consisting of 462,787 instances representing threats measured over a 144 hour period. Our analysis can be generalised to a variety of other contexts. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).",
"title": ""
},
{
"docid": "8a7cf92704d06baee24cb6f2a551094d",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamically scalable and often virtualized resources are provided as a service over the Internet has become a significant issue. The cloud computing refers to a class of systems and applications that employ distributed resources to perform a function in a decentralized manner. Cloud computing is to utilize the computing resources (service nodes) on the network to facilitate the execution of complicated tasks that require large-scale computation. Thus, the selecting nodes for executing a task in the cloud computing must be considered, and to exploit the effectiveness of the resources, they have to be properly selected according to the properties of the task. However, in this study, a two-phase scheduling algorithm under a three-level cloud computing network is advanced. The proposed scheduling algorithm combines OLB (Opportunistic Load Balancing) and LBMM (Load Balance Min-Min) scheduling algorithms that can utilize more better executing efficiency and maintain the load balancing of system.",
"title": ""
},
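The entry above describes the two-phase scheduler only at a high level; the following minimal Python sketch illustrates one plausible reading, with OLB dispatching tasks round-robin to the manager nodes and LBMM (min-min) binding each task to the service node with the smallest resulting completion time. The node names, task list, and execution-time table are hypothetical, not taken from the paper.

```python
# Minimal sketch of a two-phase OLB + LBMM (min-min) scheduler.
# All node names and timing figures below are made up for illustration.

def olb_dispatch(tasks, managers):
    """Phase 1 (OLB): hand each task to managers in round-robin order,
    i.e. to whichever manager becomes available next."""
    assignment = {m: [] for m in managers}
    for i, task in enumerate(tasks):
        assignment[managers[i % len(managers)]].append(task)
    return assignment

def lbmm_assign(tasks, exec_time):
    """Phase 2 (LBMM / min-min): repeatedly pick the task whose best
    completion time is smallest and bind it to that node."""
    ready = {node: 0.0 for node in exec_time}          # current load per node
    schedule = []
    remaining = list(tasks)
    while remaining:
        best = None                                    # (completion, task, node)
        for task in remaining:
            for node, costs in exec_time.items():
                completion = ready[node] + costs[task]
                if best is None or completion < best[0]:
                    best = (completion, task, node)
        completion, task, node = best
        ready[node] = completion
        schedule.append((task, node, completion))
        remaining.remove(task)
    return schedule

if __name__ == "__main__":
    tasks = ["t1", "t2", "t3", "t4"]
    managers = ["mgr_a", "mgr_b"]
    # Hypothetical execution-time table: node -> task -> seconds.
    exec_time = {
        "node1": {"t1": 3, "t2": 5, "t3": 2, "t4": 4},
        "node2": {"t1": 4, "t2": 2, "t3": 6, "t4": 3},
    }
    print(olb_dispatch(tasks, managers))
    print(lbmm_assign(tasks, exec_time))
```

The min-min loop is quadratic in the number of tasks per node set, which is acceptable for the small task batches a single manager node would handle.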
{
"docid": "99a871a37e8d0580371a868895bed721",
"text": "This paper proposed a behavior-based steering controller for four wheel independent steering vehicle. The proposed controller acts as virtual linkages among each wheel to minimize wheel slip resulted by the misalignment of the orientations of wheels effectively. As there is no trajectory planning needed in the control algorithm, the proposed controller is especially suitable for non-autonomous vehicle which the driving path cannot be pre-determined. Numerical simulations to examine the performance of the proposed controller are implemented. Results show the effectiveness and robustness of the proposed behavior-based controller.",
"title": ""
},
{
"docid": "11a5dcde4ff379be3a898e58daa69a84",
"text": "The commercial re-use of open government data is broadly expected to generate economic value. However, the practice and study of this trend is still in its infancy. In particular, the issue of value creation in the commercial re-use open government data remains largely unknown. This study aims to further understand how open government data is used to develop commercial products and services. Grounded in the comprehensive data obtained from a sample of 500 US firms that use open government data as part of their business model, we propose a taxonomy that encompasses three business model archetypes (enablers, facilitators, and integrators). Furthermore, we discuss the value proposition of each business model archetype, and subsequently present a framework that describes the value created in the context of the open government data ecosystem. Our framework can be used by both scholars and practitioners in the field of open government data to effectively frame the debate of the value created by the commercial re-use of open government data. Simultaneously, our work can be of benefit to entrepreneurs as it provides a systematic overview, as well as practical insights, of the growing use of open government data in the private sector.",
"title": ""
}
] |
scidocsrr
|
74d78b9a39c9f1643fa0ccce7a0fdf83
|
TRESOR-HUNT: attacking CPU-bound encryption
|
[
{
"docid": "14dd650afb3dae58ffb1a798e065825a",
"text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.",
"title": ""
},
{
"docid": "14a5714f38f355fa967f1b3a4789f0f1",
"text": "Disk encryption has become an important security measure for a multitude of clients, including governments, corporations, activists, security-conscious professionals, and privacy-conscious individuals. Unfortunately, recent research has discovered an effective side channel attack against any disk mounted by a running machine [23]. This attack, known as the cold boot attack, is effective against any mounted volume using state-of-the-art disk encryption, is relatively simple to perform for an attacker with even rudimentary technical knowledge and training, and is applicable to exactly the scenario against which disk encryption is primarily supposed to defend: an adversary with physical access.\n While there has been some previous work in defending against this attack [27], the only currently available solution suffers from the twin problems of disabling access to the SSE registers and supporting only a single encrypted volume, hindering its usefulness for such common encryption scenarios as data and swap partitions encrypted with different keys (the swap key being a randomly generated throw-away key). We present Loop-Amnesia, a kernel-based disk encryption mechanism implementing a novel technique to eliminate vulnerability to the cold boot attack. We contribute a novel technique for shielding multiple encryption keys from RAM and a mechanism for storing encryption keys inside the CPU that does not interfere with the use of SSE. We offer theoretical justification of Loop-Amnesia's invulnerability to the attack, verify that our implementation is not vulnerable in practice, and present measurements showing our impact on I/O accesses to the encrypted disk is limited to a slowdown of approximately 2x. Loop-Amnesia is written for x86-64, but our technique is applicable to other register-based architectures. We base our work on loop-AES, a state-of-the-art open source disk encryption package for Linux.",
"title": ""
}
] |
[
{
"docid": "038c4b82654b3de5c6b49644942c77d6",
"text": "Continuous improvement of business processes is a challenging task that requires complex and robust supporting systems. Using advanced analytics methods and emerging technologies--such as business intelligence systems, business activity monitoring, predictive analytics, behavioral pattern recognition, and \"type simulations\"--can help business users continuously improve their processes. However, the high volumes of event data produced by the execution of processes during the business lifetime prevent business users from efficiently accessing timely analytics data. This article presents a technological solution using a big data approach to provide business analysts with visibility on distributed process and business performance. The proposed architecture lets users analyze business performance in highly distributed environments with a short time response. This article is part of a special issue on leveraging big data and business analytics.",
"title": ""
},
{
"docid": "7ea777ccae8984c26317876d804c323c",
"text": "The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated proteins) system was first identified in bacteria and archaea and can degrade exogenous substrates. It was developed as a gene editing technology in 2013. Over the subsequent years, it has received extensive attention owing to its easy manipulation, high efficiency, and wide application in gene mutation and transcriptional regulation in mammals and plants. The process of CRISPR/Cas is optimized constantly and its application has also expanded dramatically. Therefore, CRISPR/Cas is considered a revolutionary technology in plant biology. Here, we introduce the mechanism of the type II CRISPR/Cas called CRISPR/Cas9, update its recent advances in various applications in plants, and discuss its future prospects to provide an argument for its use in the study of medicinal plants.",
"title": ""
},
{
"docid": "7b95b771e6194efb2deee35cfc179040",
"text": "A Bayesian nonparametric model is a Bayesian model on an infinite-dimensional parameter space. The parameter space is typically chosen as the set of all possible solutions for a given learning problem. For example, in a regression problem the parameter space can be the set of continuous functions, and in a density estimation problem the space can consist of all densities. A Bayesian nonparametric model uses only a finite subset of the available parameter dimensions to explain a finite sample of observations, with the set of dimensions chosen depending on the sample, such that the effective complexity of the model (as measured by the number of dimensions used) adapts to the data. Classical adaptive problems, such as nonparametric estimation and model selection, can thus be formulated as Bayesian inference problems. Popular examples of Bayesian nonparametric models include Gaussian process regression, in which the correlation structure is refined with growing sample size, and Dirichlet process mixture models for clustering, which adapt the number of clusters to the complexity of the data. Bayesian nonparametric models have recently been applied to a variety of machine learning problems, including regression, classification, clustering, latent variable modeling, sequential modeling, image segmentation, source separation and grammar induction.",
"title": ""
},
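Since the entry above names Gaussian process regression as a canonical Bayesian nonparametric model, a short, hedged illustration may help; the sketch below uses scikit-learn's GaussianProcessRegressor with an RBF-plus-noise kernel on made-up data and is not tied to any particular paper.

```python
# Minimal Gaussian process regression example (a canonical Bayesian
# nonparametric model): the posterior adapts its complexity to the data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 1))                     # toy inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)    # noisy observations

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)          # posterior mean and uncertainty
for x, m, s in zip(X_test.ravel(), mean, std):
    print(f"f({x:4.1f}) ~ {m:+.2f} +/- {2*s:.2f}")
```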
{
"docid": "bbf764205f770481b787e76db5a3b614",
"text": "A∗ is a popular path-finding algorithm, but it can only be applied to those domains where a good heuristic function is known. Inspired by recent methods combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to train a heuristic represented by a DNN and combine it with A∗ . This new algorithm which we call א∗ can be used efficiently in domains where the input to the heuristic could be processed by a neural network. We compare א∗ to N-Step Deep QLearning (DQN Mnih et al. 2013) in a driving simulation with pixel-based input, and demonstrate significantly better performance in this scenario.",
"title": ""
},
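As a rough illustration of the idea in the entry above, the sketch below implements plain A* with the heuristic passed in as a callable, so a trained network's estimate could be substituted for the hand-coded Manhattan distance used here; the 10x10 grid and unit costs are illustrative and have nothing to do with the paper's driving simulation.

```python
# A* search with a pluggable heuristic function. In the paper's setting the
# heuristic would be a trained DNN; here a plain Manhattan-distance callable
# stands in, purely for illustration.
import heapq

def a_star(start, goal, neighbors, heuristic):
    open_heap = [(heuristic(start, goal), 0, start)]
    g_score = {start: 0}
    parent = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return list(reversed(path))
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < g_score.get(nxt, float("inf")):
                g_score[nxt] = new_g
                parent[nxt] = node
                heapq.heappush(open_heap, (new_g + heuristic(nxt, goal), new_g, nxt))
    return None

if __name__ == "__main__":
    def grid_neighbors(p):                      # 4-connected grid, unit cost
        x, y = p
        return [((x + dx, y + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < 10 and 0 <= y + dy < 10]

    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    print(a_star((0, 0), (7, 5), grid_neighbors, manhattan))
```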
{
"docid": "a9346f8d40a8328e963774f2604da874",
"text": "Abstract-Sign language is a lingua among the speech and the hearing impaired community. It is hard for most people who are not familiar with sign language to communicate without an interpreter. Sign language recognition appertains to track and recognize the meaningful emotion of human made with fingers, hands, head, arms, face etc. The technique that has been proposed in this work, transcribes the gestures from a sign language to a spoken language which is easily understood by the hearing. The gestures that have been translated include alphabets, words from static images. This becomes more important for the people who completely rely on the gestural sign language for communication tries to communicate with a person who does not understand the sign language. We aim at representing features which will be learned by a technique known as convolutional neural networks (CNN), contains four types of layers: convolution layers, pooling/subsampling layers, nonlinear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer will be used to recognize signs. Keywords-Convolutional Neural Networks, Softmax (key words) __________________________________________________*****_________________________________________________",
"title": ""
},
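The entry above lists the four layer types of such a CNN; a minimal PyTorch sketch of that kind of architecture is given below. The 64x64 grayscale input, channel counts, and 26-class output are assumptions for illustration, not the authors' exact network, and in practice one would train on the logits with a cross-entropy loss rather than on the softmax output.

```python
# Minimal CNN of the kind described above: conv -> nonlinearity -> pool
# blocks followed by fully connected layers and a softmax over sign classes.
# Input size (1x64x64) and the 26-class output are assumptions for illustration.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)    # class probabilities

if __name__ == "__main__":
    model = SignCNN()
    dummy = torch.randn(4, 1, 64, 64)          # batch of 4 grayscale images
    print(model(dummy).shape)                  # -> torch.Size([4, 26])
```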
{
"docid": "c4bc03788ce4273a219809ad059edaf2",
"text": "In nature, many animals are able to jump, upright themselves after landing and jump again. This allows them to move in unstructured and rough terrain. As a further development of our previously presented 7 g jumping robot, we consider various mechanisms enabling it to recover and upright after landing and jump again. After a weighted evaluation of these different solutions, we present a spherical system with a mass of 9.8 g and a diameter of 12 cm that is able to jump, upright itself after landing and jump again. In order to do so autonomously, it has a control unit and sensors to detect its orientation and spring charging state. With its current configuration it can overcome obstacles of 76 cm at a take-off angle of 75°.",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "54e2dfd355e9e082d9a6f8c266c84360",
"text": "The wealth and value of organizations are increasingly based on intellectual capital. Although acquiring talented individuals and investing in employee learning adds value to the organization, reaping the benefits of intellectual capital involves translating the wisdom of employees into reusable and sustained actions. This requires a culture that creates employee commitment, encourages learning, fosters sharing, and involves employees in decision making. An infrastructure to recognize and embed promising and best practices through social networks, evidence-based practice, customization of innovations, and use of information technology results in increased productivity, stronger financial performance, better patient outcomes, and greater employee and customer satisfaction.",
"title": ""
},
{
"docid": "efe70da1a3118e26acf10aa480ad778d",
"text": "Background: Facebook (FB) is becoming an increasingly salient feature in peoples’ lives and has grown into a bastion in our current society with over 1 billion users worldwide –the majority of which are college students. However, recent studies conducted suggest that the use of Facebook may impacts individuals’ well being. Thus, this paper aimed to explore the effects of Facebook usage on adolescents’ emotional states of depression, anxiety, and stress. Method and Material: A cross sectional design was utilized in this investigation. The study population included 76 students enrolled in the Bachelor of Science in Nursing program from a government university in Samar, Philippines. Facebook Intensity Scale (FIS) and the Depression Anxiety and Stress Scale (DASS) were the primary instruments used in this study. Results: Findings indicated correlation coefficients of 0.11 (p=0.336), 0.07 (p=0.536), and 0.10 (p=0.377) between Facebook Intensity Scale (FIS) and Depression, Anxiety, and Stress scales in the DASS. Time spent on FBcorrelated significantly with depression (r=0.233, p=0.041) and anxiety (r=0.259, p=0.023). Similarly, the three emotional states (depression, anxiety, and stress) correlated significantly. Conclusions: Intensity of Facebook use is not directly related to negative emotional states. However, time spent on Facebooking increases depression and anxiety scores. Implications of the findings to the fields of counseling and psychology are discussed.",
"title": ""
},
{
"docid": "684555a1b5eb0370eebee8cbe73a82ff",
"text": "This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.",
"title": ""
},
{
"docid": "0cf5f7521cccd0757be3a50617cf2473",
"text": "In 1997, Moody and Wu presented recurrent reinforcement learning (RRL) as a viable machine learning method within algorithmic trading. Subsequent research has shown a degree of controversy with regards to the benefits of incorporating technical indicators in the recurrent reinforcement learning framework. In 1991, Nison introduced Japanese candlesticks to the global research community as an alternative to employing traditional indicators within the technical analysis of financial time series. The literature accumulated over the past two and a half decades of research contains conflicting results with regards to the utility of using Japanese candlestick patterns to exploit inefficiencies in financial time series. In this paper, we combine features based on Japanese candlesticks with recurrent reinforcement learning to produce a high-frequency algorithmic trading system for the E-mini S&P 500 index futures market. Our empirical study shows a statistically significant increase in both return and Sharpe ratio compared to relevant benchmarks, suggesting the existence of exploitable spatio-temporal structure in Japanese candlestick patterns and the ability of recurrent reinforcement learning to detect and take advantage of this structure in a high-frequency equity index futures trading environment.",
"title": ""
},
{
"docid": "4c711149abc3af05a8e55e52eefddd97",
"text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies relating half toning arc easily identifiable in the frequency domain. This paper proposes a method for de screening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to half toning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows arc filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.",
"title": ""
},
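The descreening entry above hinges on a band-reject filter applied to small overlapped windows in the frequency domain; the numpy sketch below shows that core step for a single window. The band limits are placeholders, since a real descreener would first locate the halftone peaks, and the overlap-and-stitch logic is omitted.

```python
# Sketch of frequency-domain band-reject (notch) filtering for descreening,
# applied to a single window of the scanned image. The band limits below are
# illustrative; a real descreener would locate the halftone peaks first.
import numpy as np

def band_reject(window, low, high):
    """Zero out spatial frequencies whose radius lies in [low, high) cycles/window."""
    F = np.fft.fftshift(np.fft.fft2(window))
    h, w = window.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)
    F[(radius >= low) & (radius < high)] = 0          # reject the halftone band
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    win = rng.random((64, 64))                         # stand-in image window
    out = band_reject(win, low=20, high=28)
    print(win.shape, out.shape)
```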
{
"docid": "ea6eecdaed8e76c28071ad1d9c1c39f9",
"text": "When it comes to taking the public transportation, time and patience are of essence. In other words, many people using public transport buses have experienced time loss because of waiting at the bus stops. In this paper, we proposed smart bus tracking system that any passenger with a smart phone or mobile device with the QR (Quick Response) code reader can scan QR codes placed at bus stops to view estimated bus arrival times, buses' current locations, and bus routes on a map. Anyone can access these maps and have the option to sign up to receive free alerts about expected bus arrival times for the interested buses and related routes via SMS and e-mails. We used C4.5 (a statistical classifier) algorithm for the estimation of bus arrival times to minimize the passengers waiting time. GPS (Global Positioning System) and Google Maps are used for navigation and display services, respectively.",
"title": ""
},
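The entry above uses C4.5 to estimate bus arrival times; as a hedged stand-in, the sketch below trains scikit-learn's decision tree (CART with an entropy criterion, not a true C4.5 implementation) on made-up features and waiting-time classes.

```python
# Illustrative arrival-time estimation with a decision tree. The paper uses
# C4.5; scikit-learn's tree (CART with an entropy criterion) is only a close
# stand-in, and the features/labels below are made up.
from sklearn.tree import DecisionTreeClassifier

# Features: [hour_of_day, day_of_week, distance_to_stop_km, is_raining]
X = [
    [8, 0, 2.5, 1], [8, 0, 2.5, 0], [13, 2, 1.0, 0],
    [17, 4, 3.0, 1], [17, 4, 3.0, 0], [22, 5, 1.5, 0],
]
# Labels: discretised waiting-time class.
y = ["long", "medium", "short", "long", "medium", "short"]

clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(clf.predict([[8, 1, 2.5, 1]]))   # predicted waiting-time class
```

A real deployment would train on historical GPS traces and predict a time interval rather than a coarse class.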
{
"docid": "48d2f38037b0cab83ca4d57bf19ba903",
"text": "The term sentiment analysis can be used to refer to many different, but related, problems. Most commonly, it is used to refer to the task of automatically determining the valence or polarity of a piece of text, whether it is positive, negative, or neutral. However, more generally, it refers to determining one’s attitude towards a particular target or topic. Here, attitude can mean an evaluative judgment, such as positive or negative, or an emotional or affectual attitude such as frustration, joy, anger, sadness, excitement, and so on. Note that some authors consider feelings to be the general category that includes attitude, emotions, moods, and other affectual states. In this chapter, we use ‘sentiment analysis’ to refer to the task of automatically determining feelings from text, in other words, automatically determining valence, emotions, and other affectual states from text. Osgood, Suci, and Tannenbaum (1957) showed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active– passive). Evaluativeness is roughly the same dimension as valence (positive–negative). Russell (1980) developed a circumplex model of affect characterized by two primary dimensions: valence and arousal (degree of reactivity to stimulus). Thus, it is not surprising that large amounts of work in sentiment analysis are focused on determining valence. (See survey articles by Pang and Lee (2008), Liu and Zhang (2012), and Liu (2015).) However, there is some work on automatically detecting arousal (Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Kiritchenko, Zhu, & Mohammad, 2014b; Mohammad, Kiritchenko, & Zhu, 2013a) and growing interest in detecting emotions such as anger, frustration, sadness, and optimism in text (Mohammad, 2012; Bellegarda, 2010; Tokuhisa, Inui, & Matsumoto, 2008; Strapparava & Mihalcea, 2007; John, Boucouvalas, & Xu, 2006; Mihalcea & Liu, 2006; Genereux & Evans, 2006; Ma, Prendinger, & Ishizuka, 2005; Holzman & Pottenger, 2003; Boucouvalas, 2002; Zhe & Boucouvalas, 2002). Further, massive amounts of data emanating from social media have led to significant interest in analyzing blog posts, tweets, instant messages, customer reviews, and Facebook posts for both valence (Kiritchenko et al., 2014b; Kiritchenko, Zhu, Cherry, & Mohammad, 2014a; Mohammad et al., 2013a; Aisopos, Papadakis, Tserpes, & Varvarigou, 2012; Bakliwal, Arora, Madhappan, Kapre, Singh, & Varma, 2012; Agarwal, Xie, Vovsha, Rambow, & Passonneau, 2011; Thelwall, Buckley, & Paltoglou, 2011; Brody & Diakopoulos, 2011; Pak & Paroubek, 2010) and emotions (Hasan, Rundensteiner, & Agu, 2014; Mohammad & Kiritchenko, 2014; Mohammad, Zhu, Kiritchenko, & Martin, 2014; Choudhury, Counts, & Gamon, 2012; Mohammad, 2012a; Wang, Chen, Thirunarayan, & Sheth, 2012; Tumasjan, Sprenger, Sandner, & Welpe, 2010b; Kim, Gilbert, Edwards, &",
"title": ""
},
{
"docid": "6d80c1d1435f016b124b2d61ef4437a5",
"text": "Recent high profile developments of autonomous learning thermostats by companies such as Nest Labs and Honeywell have brought to the fore the possibility of ever greater numbers of intelligent devices permeating our homes and working environments into the future. However, the specific learning approaches and methodologies utilised by these devices have never been made public. In fact little information is known as to the specifics of how these devices operate and learn about their environments or the users who use them. This paper proposes a suitable learning architecture for such an intelligent thermostat in the hope that it will benefit further investigation by the research community. Our architecture comprises a number of different learning methods each of which contributes to create a complete autonomous thermostat capable of controlling a HVAC system. A novel state action space formalism is proposed to enable a Reinforcement Learning agent to successfully control the HVAC system by optimising both occupant comfort and energy costs. Our results show that the learning thermostat can achieve cost savings of 10% over a programmable thermostat, whilst maintaining high occupant comfort standards.",
"title": ""
},
{
"docid": "e4546038f0102d0faac18ac96e50793d",
"text": "Ontologies have been increasingly used as a core representation formalism in medical information systems. Diagnosis is one of the highly relevant reasoning problems in this domain. In recent years this problem has captured attention also in the description logics community and various proposals on formalising abductive reasoning problems and their computational support appeared. In this paper, we focus on a practical diagnostic problem from a medical domain – the diagnosis of diabetes mellitus – and we try to formalize it in DL in such a way that the expected diagnoses are abductively derived. Our aim in this work is to analyze abductive reasoning in DL from a practical perspective, considering more complex cases than trivial examples typically considered by the theoryor algorithm-centered literature, and to evaluate the expressivity as well as the particular formulation of the abductive reasoning problem needed to capture medical diagnosis.",
"title": ""
},
{
"docid": "ed0444685c9a629c7d1fda7c4912fd55",
"text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.",
"title": ""
},
{
"docid": "60da71841669948e0a57ba4673693791",
"text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.",
"title": ""
},
{
"docid": "fc2a0f6979c2520cee8f6e75c39790a8",
"text": "In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"title": ""
},
{
"docid": "92d5ebd49670681a5d43ba90731ae013",
"text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.",
"title": ""
}
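The entry above automates ROP payload construction; as a small illustration of only the very first step, the sketch below scans a raw x86 code blob for byte windows ending in a `ret` (0xC3). It does no disassembly, semantic verification, or payload arrangement, which is where a tool like Q does the real work, and the example bytes are made up.

```python
# Toy gadget harvest: find offsets of byte windows that end in `ret` (0xc3)
# inside a raw x86 code blob. Real tools (like Q) disassemble and verify the
# semantics of each candidate; this only illustrates the raw search step.
def find_ret_gadgets(code: bytes, max_len: int = 5):
    gadgets = []
    for i, b in enumerate(code):
        if b == 0xC3:                                  # ret
            start = max(0, i - max_len + 1)
            gadgets.append((start, code[start:i + 1]))
    return gadgets

if __name__ == "__main__":
    blob = bytes.fromhex("9058c3b8010000005fc3")       # made-up code bytes
    for off, g in find_ret_gadgets(blob):
        print(hex(off), g.hex())
```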
] |
scidocsrr
|
1679aa7e80195357da386fefef7eb284
|
Reciprocal and Heterogeneous Link Prediction in Social Networks
|
[
{
"docid": "a6347b2f03ba45d233a872cb6f6891a8",
"text": "The data in many disciplines such as social networks, web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this paper, we consider the problem of temporal link prediction: Given link data for time periods 1 through T, can we predict the links in time period T +1? Specifically, we look at bipartite graphs changing over time and consider matrix- and tensor-based methods for predicting links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem.",
"title": ""
},
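The entry above mentions approximating Katz scores with a truncated SVD; the numpy sketch below contrasts the exact Katz matrix, (I - βA)⁻¹ - I, with the same formula applied to a rank-k SVD approximation of A. The toy graph, β, and rank are illustrative, and the paper's actual bipartite extension and scalable computation differ from this naive dense version.

```python
# Katz link-prediction scores on a small graph, computed exactly via
# (I - beta*A)^{-1} - I, and then with a rank-k truncated SVD of A used in
# place of A. The graph, beta, and rank below are illustrative only.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
beta = 0.1                                # must satisfy beta < 1/spectral_radius
n = A.shape[0]

katz_exact = np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

U, s, Vt = np.linalg.svd(A)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # rank-k approximation of A
katz_lowrank = np.linalg.inv(np.eye(n) - beta * A_k) - np.eye(n)

print(np.round(katz_exact, 3))
print(np.round(katz_lowrank, 3))
```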
{
"docid": "f233f816c84407a4acd694f540bb18a9",
"text": "Link prediction is a key technique in many applications such as recommender systems, where potential links between users and items need to be predicted. A challenge in link prediction is the data sparsity problem. In this paper, we address this problem by jointly considering multiple heterogeneous link prediction tasks such as predicting links between users and different types of items including books, movies and songs, which we refer to as the collective link prediction (CLP) problem. We propose a nonparametric Bayesian framework for solving the CLP problem, which allows knowledge to be adaptively transferred across heterogeneous tasks while taking into account the similarities between tasks. We learn the inter-task similarity automatically. We also introduce link functions for different tasks to correct their biases and skewness of distributions in their link data. We conduct experiments on several real world datasets and demonstrate significant improvements over several existing state-of-the-art methods.",
"title": ""
},
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
}
] |
[
{
"docid": "4e202df9f80488ffb14feb2c40981336",
"text": "UNLABELLED\nBoden et al. suggested syndesmosis fixation was not necessary in distal pronation external rotation (PER) ankle fractures if rigid bimalleolar fracture fixation is achieved and was not necessary with deltoid ligament injury if the fibular fracture is no higher than 4.5 cm of the tibiotalar joint. We asked whether height of the fibular fracture with or without medial stability predicted syndesmotic instability as compared with intraoperative hook testing in these fractures. We reviewed 62 patients (35 male, 27 female) with a mean age of 45.6 years (range, 19-80 years). Using a bone hook applied to the distal fibula with lateral force to the distal fibula in the coronal plane, we fluoroscopically assessed the degree of syndesmosis diastasis in all patients. The mean height of the fibular fracture in patients with a positive hook test was higher than in patients with a negative hook test (54.2 mm; standard deviation [SD], 29.3 versus 34.8 mm; SD, 21.4, respectively). The height of the fibular fracture showed a positive predictive value of 0.93 and a negative predictive value of 0.53 in predicting syndesmotic instability; specificity of the criteria of Boden et al. was high (0.96). However, sensitivity was low (0.39) using the hook test as the gold standard. The criteria of Boden et al. may be helpful in planning, but may have some limitations as a predictor of syndesmotic instability in distal PER ankle fractures.\n\n\nLEVEL OF EVIDENCE\nLevel III, diagnostic study. See Guidelines for Authors for a complete description of levels of evidence.",
"title": ""
},
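The entry above reports a positive predictive value of 0.93, negative predictive value of 0.53, specificity of 0.96, and sensitivity of 0.39; as a small numerical illustration of how such figures arise from a 2x2 table, the sketch below uses hypothetical counts chosen to reproduce those values for 62 patients (the abstract does not give the actual table).

```python
# How PPV, NPV, sensitivity and specificity fall out of a 2x2 table.
# The counts below are hypothetical, chosen only to reproduce the reported
# figures; they are NOT the study's actual data.
tp, fp, fn, tn = 14, 1, 22, 25

sensitivity = tp / (tp + fn)     # criteria positive among hook-test positives
specificity = tn / (tn + fp)     # criteria negative among hook-test negatives
ppv = tp / (tp + fp)             # hook-test positive among criteria positives
npv = tn / (tn + fn)             # hook-test negative among criteria negatives

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```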
{
"docid": "b31ebdbd7edc0b30b0529a85fab0b612",
"text": "In this paper, we present RFMS, the real-time flood monitoring system with wireless sensor networks, which is deployed in two volcanic islands Ulleung-do and Dok-do located in the East Sea near to the Korean peninsula and developed for flood monitoring. RFMS measures river and weather conditions through wireless sensor nodes equipped with different sensors. Measured information is employed for early-warning via diverse types of services such as SMS (short message service) and a Web service.",
"title": ""
},
{
"docid": "fab72d1223fa94e918952b8715e90d30",
"text": "A novel wideband crossed dipole loaded with four parasitic elements is investigated in this letter. The printed crossed dipole is incorporated with a pair of vacant quarter rings to feed the antenna. The antenna is backed by a metallic plate to provide an unidirectional radiation pattern with a wide axial-ratio (AR) bandwidth. To verify the proposed design, a prototype is fabricated and measured. The final design with an overall size of $0.46\\ \\lambda_{0}\\times 0.46\\ \\lambda_{0}\\times 0.23\\ \\lambda_{0} (> \\lambda_{0}$ is the free-space wavelength of circularly polarized center frequency) yields a 10-dB impedance bandwidth of approximately 62.7% and a 3-dB AR bandwidth of approximately 47.2%. In addition, the proposed antenna has a stable broadside gain of 7.9 ± 0.5 dBi within passband.",
"title": ""
},
{
"docid": "b113d45660629847afbd7faade1f3a71",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.",
"title": ""
},
{
"docid": "e754c7c7821703ad298d591a3f7a3105",
"text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.",
"title": ""
},
{
"docid": "a1eff890cfc0d1334ebea1d90d152ae5",
"text": "The purpose of this research was to develop understanding about how vendor firms make choice about agile methodologies in software projects and their fit. Two analytical frameworks were developed from extant literature and the findings were compared with real world decisions. Framework 1 showed that the choice of XP for one project was not supported by the guidelines given by the framework. The choices of SCRUM for other two projects, were partially supported. Analysis using the framework 2 showed that except one XP project, all others had sufficient project management support, limited scope for adaptability and had prominence for rules.",
"title": ""
},
{
"docid": "997a1ec16394a20b3a7f2889a583b09d",
"text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.",
"title": ""
},
{
"docid": "ed16247afd56d561aabe8bb8f3e0c6fe",
"text": "By combining a horizontal planar dipole and a vertically oriented folded shorted patch antenna, a new low-profile magneto-electric dipole antenna is presented. The antenna is simply excited by a coaxial feed that works as a balun. A prototype was fabricated and measured. Simulated and measured results agree well. An impedance bandwidth of 45.6% for <formula formulatype=\"inline\"><tex Notation=\"TeX\">${\\rm SWR}\\leq 1.5$</tex></formula> from 1.86 to 2.96 GHz was achieved. Stable radiation pattern with low cross polarization, low back radiation, and an antenna gain of 8.1 <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\pm$</tex></formula> 0.8 dBi was found over the operating frequencies. The height of the antenna is only <formula formulatype=\"inline\"><tex Notation=\"TeX\">$0.169\\lambda$</tex> </formula> (where <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\lambda$</tex> </formula> is the free-space wavelength at the center frequency). In addition, the antenna is dc grounded, which satisfies the requirement of many outdoor antennas.",
"title": ""
},
{
"docid": "353bbc5e68ec1d53b3cd0f7c352ee699",
"text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.",
"title": ""
},
{
"docid": "3ce1ed2dd4bacd96395b84c2b715d361",
"text": "In this paper, we present the design and fabrication of a centimeter-scale propulsion system for a robotic fish. The key to the design is selection of an appropriate actuator and a body frame that is simple and compact. SMA spring actuators are customized to provide the necessary work output for the microrobotic fish. The flexure joints, electrical wiring and attachment pads for SMA actuators are all embedded in a single layer of copper laminated polymer film, sandwiched between two layers of glass fiber. Instead of using individual actuators to rotate each joint, each actuator rotates all the joints to a certain mode shape and undulatory motion is created by a timed sequence of these mode shapes. Subcarangiform swimming mode of minnows has been emulated using five links and four actuators. The size of the four-joint propulsion system is 6 mm wide, 40 mm long with the body frame thickness of 0.25 mm.",
"title": ""
},
{
"docid": "212536baf7f5bd2635046774436e0dbf",
"text": "Mobile devices have already been widely used to access the Web. However, because most available web pages are designed for desktop PC in mind, it is inconvenient to browse these large web pages on a mobile device with a small screen. In this paper, we propose a new browsing convention to facilitate navigation and reading on a small-form-factor device. A web page is organized into a two level hierarchy with a thumbnail representation at the top level for providing a global view and index to a set of sub-pages at the bottom level for detail information. A page adaptation technique is also developed to analyze the structure of an existing web page and split it into small and logically related units that fit into the screen of a mobile device. For a web page not suitable for splitting, auto-positioning or scrolling-by-block is used to assist the browsing as an alterative. Our experimental results show that our proposed browsing convention and developed page adaptation scheme greatly improve the user's browsing experiences on a device with a small display.",
"title": ""
},
{
"docid": "3c84fb80542fd68e58085c0d754701d1",
"text": "of a thesis at the University of Miami. Thesis supervised by Professor Larry Brand. No. of pages in text. (43) Live feeds are utilized in marine fish hatcheries to feed and promote the health of finfish larvae due to their nutritional advantages. The presence of detrimental bacteria in rotifer culture can cause disease outbreaks in larval rearing. Nevertheless, the use of UV application to disinfect seawater is not very effective to eliminate and inactivate all pathogenic agents presented in the raw surface water. To investigate new methods of disinfection, two experiments were conducted at the University of Miami Experimental Hatchery (UMEH) to quantify and test for antibiotic susceptibility for Vibrio spp. and the total coliforms by using plate counting method and treating with two water-soluble antibiotics, Tobramycin and Minocycline, at concentrations of 30 μg/mL, 100 μg/mL and 200 μg/mL. In the first experiment, water samples from Virginia Key Bear Cut, Florida were collected from five locations beginning from surface water, settling tank, sand filter, after a 120 watts UV instrument and after a 80 watts UV instrument respectively. No fecal coliform colonies were observed by plate counting method after UV disinfections, implying a complete inactivation by UV irradiation, but V. vulnificus, V. parahaemolyticus and",
"title": ""
},
{
"docid": "e22378cc4ae64e9c3abbd4b308198fb6",
"text": "Knowledge about the argumentative structure of scientific articles can, amongst other things, be used to improve automatic abstracts. We argue that the argumentative structure of scientific discourse can be automatically detected because reasordng about problems, research tasks and solutions follows predictable patterns. Certain phrases explicitly mark the rhetorical status (communicative function) of sentences with respect to the global argumentative goal. Examples for such meta-diacaurse markers are \"in this paper, we have p r e s e n t e d . . . \" or \"however, their method fails to\". We report on work in progress about recognizing such meta-comments automatically in research articles from two disciplines: computational linguistics and medicine (cardiology). 1 M o t i v a t i o n We are interested in a formal description of the document s t ructure of scientific articles from different disciplines. Such a description could be of practical use for many applications in document management; our specific mot ivat ion for detecting document structure is qual i ty improvement in automatic abstracting. Researchem in the field of automatic abstracting largely agree that it is currently not technically feasible to create automatic abstracts based on full text unders tanding (Sparck Jones 1994). As a result, many researchers have turned to sentence extraction (Kupiec, Pedersen, & Chen 1995; Brandow, Mitze, & Rau 1995; Hovy & Lin 1997). Sentence extraction, which does not involve any deep analysis, has the huge advantage of being robust with respect to individual writing style, discipline and text type (genre). Instead of producing a b s t r a c t , this results produces only extracts: documen t surrogates consisting of a number of sentences selected verbat im from the original text. We consider a concrete document retrieval (DR) scenario in which a researcher wants to select one or more scientific articles from a large scientific database (or even f rom the Internet) for further inspection. The ma in task for the searcher is relevance decision for each paper: she needs to decide whether or not to spend more t ime on a paper (read or skim-read it), depending on how useful it presumably is to her current information needs. Traditional sentence extracts can be used as rough-and-ready relevance indicators for this task, but they are not doing a great job at representing the contents of the original document: searchers often get the wrong idea about what the text is about. Much of this has to do with the fact that extracts are typically incoherent texts, consisting of potential ly unrelated sentences which have been taken out of their context. Crucially, extracts have no handle at revealing the text 's logical and semantic organisation. More sophisticated, user-tailored abstracts could help the searcher make a fast, informed relevance decision by taking factors like the searcher's expertise and current information need into account. If the searcher is dealing with research she knows well, her information needs might be quite concrete: during the process of writing her own paper she might want to find research which supports her own claims, find out if there are contradictory results to hers in the literature, or compare her results to those of researchers using a similar methodology. 
A different information need arises when she wants to gain an overview of a new research area as an only \"partially informed user\" in this field (Kircz 1991) she will need to find out about specific research goals, the names of the researchers who have contributed the main research ideas in a given time period, along with information of methodology and results in this research field. There are new functions these abstracts could fulfil. In order to make an informed relevance decision, the searcher needs to judge differences and similarities between papers, e.g. how a given paper relates to similar papers with respect to research goals or methodology, so that she can place the research described in a given paper in the larger picture of the field, a function we call navigation between research articles. A similar operation is navigation within a paper, which supports searchers in non-linear reading and allows them to find relevant information faster, e.g. numerical results. We believe that a document surrogate that aims at supporting such functions should characterize research articles in terms of the problems, research tasks and",
"title": ""
},
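The entry above rests on spotting meta-discourse markers such as "in this paper, we have presented"; the sketch below is a toy regex matcher that maps a few such phrases to rhetorical labels. The marker list and labels are made-up illustrations, not the authors' lexicon or category scheme.

```python
# Tiny illustration of meta-discourse marker spotting: each regex is paired
# with a rhetorical label. The marker list is a made-up subset, not the
# authors' lexicon.
import re

MARKERS = [
    (re.compile(r"\bin this paper,? we (have )?(present|propose|argue)", re.I), "OWN_CONTRIBUTION"),
    (re.compile(r"\bhowever, (their|this) (method|approach) fails", re.I), "CONTRAST/WEAKNESS"),
    (re.compile(r"\bto the best of our knowledge", re.I), "NOVELTY_CLAIM"),
]

def rhetorical_status(sentence):
    return [label for pattern, label in MARKERS if pattern.search(sentence)] or ["NONE"]

if __name__ == "__main__":
    for s in ["In this paper, we have presented a new parser.",
              "However, their method fails to scale.",
              "The corpus contains 80 articles."]:
        print(rhetorical_status(s), "<-", s)
```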
{
"docid": "e9af5e2bfc36dd709ae6feefc4c38976",
"text": "Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.",
"title": ""
},
{
"docid": "c3cc7e543a9a304a50a4a01aa72807c2",
"text": "Visual illusion can be strengthened or weakened with the addition of extra visual elements. For example, in Poggendorff illusion, with an additional bar added, the illusory skew in the perceived angle can be enlarged or reduced. In this paper, we show that a nontrivial interaction between lateral inhibitory processes in the early visual system (i.e., disinhibition) can explain such enhancement or degradation of the illusory percept. The computational model we derived successfully predicted the perceived angle in a modified Poggendorff illusion task with an extra thick bar. The concept of disinhibition employed in the model is general enough that we expect it can be further extended to account for other classes of geometric illusions.",
"title": ""
},
{
"docid": "928912d2c482f6178f9311d1830c8b54",
"text": "We describe the Networked Physical World system, a ubiquitous computing system that integrates the physical world with the virtual world. The instrumentation of non-electronic devices, such as cans of soda and boxes of detergent, with inexpensive, low functionality computing devices enables the virtual world to sense and identify physical objects. The ability to accurately and definitively identify physical objects without human intervention is paramount to many applications including home automation and supply chain management. However, a system that networks all physical objects requires a careful, deliberate system design to support the huge scale of a quadrillion node network. We have implemented a prototype of the Networked Physical World system and are evaluating its performance and capabilities within a large-scale real-world supply chain application, the Field Trial. We describe the building b lock system components of the Networked Physical World system and present some preliminary results from the Field Trial.",
"title": ""
},
{
"docid": "54ea2e0435e1a6a3554d420dab3b2f54",
"text": "A lack of information security awareness within some parts of society as well as some organisations continues to exist today. Whilst we have emerged from the threats of late 1990s of viruses such as Code Red and Melissa, through to the phishing emails of the mid 2000’s and the financial damage some such as the Nigerian scam caused, we continue to react poorly to new threats such as demanding money via SMS with a promise of death to those who won’t pay. So is this lack of awareness translating into problems within the workforce? There is often a lack of knowledge as to what is an appropriate level of awareness for information security controls across an organisation. This paper presents the development of a theoretical framework and model that combines aspects of information security best practice standards as presented in ISO/IEC 27002 with theories of Situation Awareness. The resultant model is an information security awareness capability model (ISACM). A preliminary survey is being used to develop the Awareness Importance element of the model and will leverage the opinions of information security professionals. A subsequent survey is also being developed to measure the Awareness Capability element of the model. This will present scenarios that test Level 1 situation awareness (perception), Level 2 situation awareness (comprehension) and finally Level 3 situation awareness (projection). Is it time for awareness of information security to now hit the mainstream of society, governments and organisations?",
"title": ""
},
{
"docid": "e3459bb93bb6f7af75a182472bb42b3e",
"text": "We consider the algorithmic problem of selecting a set of target nodes that cause the biggest activation cascade in a network. In case when the activation process obeys the diminishing return property, a simple hill-climbing selection mechanism has been shown to achieve a provably good performance. Here we study models of influence propagation that exhibit critical behavior and where the property of diminishing returns does not hold. We demonstrate that in such systems the structural properties of networks can play a significant role. We focus on networks with two loosely coupled communities and show that the double-critical behavior of activation spreading in such systems has significant implications for the targeting strategies. In particular, we show that simple strategies that work well for homogenous networks can be overly suboptimal and suggest simple modification for improving the performance by taking into account the community structure.",
"title": ""
},
{
"docid": "d8170e82fcfb0da85ad2f3d7bed4161e",
"text": "In this paper, a new task scheduling algorithm called RASA, considering the distribution and scalability characteristics of grid resources, is proposed. The algorithm is built through a comprehensive study and analysis of two well known task scheduling algorithms, Min-min and Max-min. RASA uses the advantages of the both algorithms and covers their disadvantages. To achieve this, RASA firstly estimates the completion time of the tasks on each of the available grid resources and then applies the Max-min and Min-min algorithms, alternatively. In this respect, RASA uses the Min-min strategy to execute small tasks before the large ones and applies the Max-min strategy to avoid delays in the execution of large tasks and to support concurrency in the execution of large and small tasks. Our experimental results of applying RASA on scheduling independent tasks within grid environments demonstrate the applicability of RASA in achieving schedules with comparatively lower makespan.",
"title": ""
},
{
"docid": "f5ce4a13a8d081243151e0b3f0362713",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
}
] |
scidocsrr
|
f7a68af102ce79772d1eecf83156418f
|
New emerging leadership theories and styles
|
[
{
"docid": "a0547eae9a2186d4c6f1b8307317f061",
"text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "854b2bfdef719879a437f2d87519d8e8",
"text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.",
"title": ""
},
{
"docid": "a2fcbb6a8ec4d913b8f39c9bc5dbf609",
"text": "To address present and future leadership needs, a model of authentic leader and follower development is proposed and examined with respect to its relationship to veritable, sustainable follower performance. The developmental processes of leader and follower self-awareness and self-regulation are emphasized. The influence of the leader's and followers' personal histories and trigger events are considered as antecedents of authentic leadership and followership, as well as the reciprocal effects with an inclusive, ethical, caring and strength-based organizational climate. Positive modeling is viewed as a primary means whereby leaders develop authentic followers. Posited outcomes of authentic leader–follower relationships include heightened levels of follower trust in the leader, engagement, workplace well-being and veritable, sustainable performance. Testable propositions and directions for exploring them are presented and discussed. Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
}
] |
[
{
"docid": "d312d2976737edfba3b82594541a7233",
"text": "We present a novel technique to remove spurious ambiguity fr om t ansition systems for dependency parsing. Our technique chooses a canonical sequence of transition opera tions (computation) for a given dependency tree. Our technique can be applied to a large class of bottom-up transi io systems, including for instance Nivre [2004] and Attardi [2006].",
"title": ""
},
{
"docid": "dd45abc886edb854707acde3e675c5f7",
"text": "The connecting of physical units, such as thermostats, medical devices and self-driving vehicles, to the Internet is happening very quickly and will most likely continue to increase exponentially for some time to come. Valid concerns about security, safety and privacy do not appear to be hampering this rapid growth of the so-called Internet of Things (IoT). There have been many popular and technical publications by those in software engineering, cyber security and systems safety describing issues and proposing various “fixes.” In simple terms, they address the “why” and the “what” of IoT security, safety and privacy, but not the “how.” There are many cultural and economic reasons why security and privacy concerns are relegated to lower priorities. Also, when many systems are interconnected, the overall security, safety and privacy of the resulting systems of systems generally have not been fully considered and addressed. In order to arrive at an effective enforcement regime, we will examine the costs of implementing suitable security, safety and privacy and the economic consequences of failing to do so. We evaluated current business, professional and government structures and practices for achieving better IoT security, safety and privacy, and found them lacking. Consequently, we proposed a structure for ensuring that appropriate security, safety and privacy are built into systems from the outset. Within such a structure, enforcement can be achieved by incentives on one hand and penalties on the other. Determining the structures and rules necessary to optimize the mix of penalties and incentives is a major goal of this paper.",
"title": ""
},
{
"docid": "4798cb0bcd147e6a49135b845d7f2624",
"text": "There is an upsurging interest in designing succinct data structures for basic searching problems (see [23] and references therein). The motivation has to be found in the exponential increase of electronic data nowadays available which is even surpassing the significant increase in memory and disk storage capacities of current computers. Space reduction is an attractive issue because it is also intimately related to performance improvements as noted by several authors (e.g. Knuth [15], Bentley [5]). In designing these implicit data structures the goal is to reduce as much as possible the auxiliary information kept together with the input data without introducing a significant slowdown in the final query performance. Yet input data are represented in their entirety thus taking no advantage of possible repetitiveness into them. The importance of those issues is well known to programmers who typically use various tricks to squeeze data as much as possible and still achieve good query performance. Their approaches, though, boil down to heuristics whose effectiveness is witnessed only by experimentation. In this paper, we address the issue of compressing and indexing data by studying it in a theoretical framework. We devise a novel data structure for indexing and searching whose space occupancy is a function of the entropy of the underlying data set. The novelty resides in the careful combination of a compression algorithm, proposed by Burrows and Wheeler [7], with the structural properties of a well known indexing tool, the Suffix Array [17]. We call the data structure opportunistic since its space occupancy is decreased when the input is compressible at no significant slowdown in the query performance. More precisely, its space occupancy is optimal in an information-content sense because a text T [1, u] is stored using O(Hk(T )) + o(1) bits per input symbol, where Hk(T ) is the kth order entropy of T (the bound holds for any fixed k). Given an arbitrary string P [1, p], the opportunistic data structure allows to search for the occ occurrences of P in T requiring O(p+occ log u) time complexity (for any fixed > 0). If data are uncompressible we achieve the best space bound currently known [11]; on compressible data our solution improves the succinct suffix array of [11] and the classical suffix tree and suffix array data structures either in space or in query time complexity or both. It is a belief [27] that some space overhead should be paid to use full-text indices (like suffix trees or suffix arrays) with respect to word-based indices (like inverted lists). The results in this paper show that a full-text index may achieve sublinear space overhead on compressible texts. As an application we devise a variant of the well-known Glimpse tool [18] which achieves sublinear space and sublinear query time complexity. Conversely, inverted lists achieve only the second goal [27], and classical Glimpse achieves both goals but under some restrictive conditions [4]. Finally, we investigate the modifiability of our opportunistic data structure by studying how to choreograph its basic ideas with a dynamic setting thus achieving effective searching and updating time bounds. ∗Dipartimento di Informatica, Università di Pisa, Italy. E-mail: ferragin@di.unipi.it. †Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale, Alessandria, Italy and IMC-CNR, Pisa, Italy. E-mail: manzini@mfn.unipmn.it.",
"title": ""
},
{
"docid": "178dc3f162f0a4bd2a43ae4da72478cc",
"text": "Regularisation of deep neural networks (DNN) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and c ontrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a far more effecti ve regulariser which does not suffer from the same limitations.",
"title": ""
},
{
"docid": "5850e29d77f361a56d1580f08f44af07",
"text": "Advances in reflectarrays and array lenses with electronic beam-forming capabilities are enabling a host of new possibilities for these high-performance, low-cost antenna architectures. This paper reviews enabling technologies and topologies of reconfigurable reflectarray and array lens designs, and surveys a range of experimental implementations and achievements that have been made in this area in recent years. The paper describes the fundamental design approaches employed in realizing reconfigurable designs, and explores advanced capabilities of these nascent architectures, such as multi-band operation, polarization manipulation, frequency agility, and amplification. Finally, the paper concludes by discussing future challenges and possibilities for these antennas.",
"title": ""
},
{
"docid": "20e504a115a1448ea366eae408b6391f",
"text": "Clustering algorithms have emerged as an alternative powerful meta-learning tool to accurately analyze the massive volume of data generated by modern applications. In particular, their main goal is to categorize data into clusters such that objects are grouped in the same cluster when they are similar according to specific metrics. There is a vast body of knowledge in the area of clustering and there has been attempts to analyze and categorize them for a larger number of applications. However, one of the major issues in using clustering algorithms for big data that causes confusion amongst practitioners is the lack of consensus in the definition of their properties as well as a lack of formal categorization. With the intention of alleviating these problems, this paper introduces concepts and algorithms related to clustering, a concise survey of existing (clustering) algorithms as well as providing a comparison, both from a theoretical and an empirical perspective. From a theoretical perspective, we developed a categorizing framework based on the main properties pointed out in previous studies. Empirically, we conducted extensive experiments where we compared the most representative algorithm from each of the categories using a large number of real (big) data sets. The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, stability, runtime, and scalability tests. In addition, we highlighted the set of clustering algorithms that are the best performing for big data.",
"title": ""
},
{
"docid": "14b7c4f8a3fa7089247f1d4a26186c5d",
"text": "System Dynamics is often used for dealing with dynamically complex issues that are also uncertain. This paper reviews how uncertainty is dealt with in System Dynamics modeling, where uncertainties are located in models, which types of uncertainties are dealt with, and which levels of uncertainty could be handled. Shortcomings of System Dynamics and its practice in dealing with uncertainty are distilled from this review and reframed as opportunities. Potential opportunities for dealing with uncertainty in System Dynamics that are discussed here include (i) dealing explicitly with difficult sorts of uncertainties, (ii) using multi-model approaches for dealing with alternative assumptions and multiple perspectives, (iii) clearly distinguishing sensitivity analysis from uncertainty analysis and using them for different purposes, (iv) moving beyond invariant model boundaries, (v) using multi-method approaches, advanced techniques and new tools, and (vi) further developing and using System Dynamics strands for dealing with deep uncertainty.",
"title": ""
},
{
"docid": "52deb6870cc5e998c9f61132fd763bdd",
"text": "BACKGROUND\nThe burden of malaria is a key challenge to both human and economic development in malaria endemic countries. The impact of malaria can be categorized from three dimensions, namely: health, social and economic. The objective of this study was to estimate the impact of malaria morbidity on gross domestic product (GDP) of Uganda.\n\n\nMETHODS\nThe impact of malaria morbidity on GDP of Uganda was estimated using double-log econometric model. The 1997-2003 time series macro-data used in the analysis were for 28 quarters, i.e. 7 years times 4 quarters per year. It was obtained from national and international secondary sources.\n\n\nRESULTS\nThe slope coefficient for Malaria Index (M) was -0.00767; which indicates that when malaria morbidity increases by one unit, while holding all other explanatory variables constant, per capita GDP decreases by US$0.00767 per year. In 2003 Uganda lost US$ 49,825,003 of GDP due to malaria morbidity. Dividing the total loss of US$49.8 million by a population of 25,827,000 yields a loss in GDP of US$1.93 per person in Uganda in 2003.\n\n\nCONCLUSION\nMalaria morbidity results in a substantive loss in GDP of Uganda. The high burden of malaria leads to decreased long-term economic growth, and works against poverty eradication efforts and socioeconomic development of the country.",
"title": ""
},
{
"docid": "9f3966e64089594b261e1cd9dca8eef1",
"text": "We examine how control over a technology platform can increase profits and innovation. By choosing how much to open and when to bundle enhancements, platform sponsors can influence choices of ecosystem partners. Platform openness invites developer participation but sacrifices direct sales. Bundling enhancements early drives developers away but bundling late delays platform growth. Ironically, developers can prefer sponsored platforms to unmanaged open standards despite giving up their applications. Results can inform antitrust law and innovation strategy.",
"title": ""
},
{
"docid": "a51b57427c5204cb38483baa9389091f",
"text": "Cross-laminated timber (CLT), a new generation of engineered wood product developed initially in Europe, has been gaining popularity in residential and non-residential applications in several countries. Numerous impressive lowand mid-rise buildings built around the world using CLT showcase the many advantages that this product can offer to the construction sector. This article provides basic information on the various attributes of CLT as a product and as structural system in general, and examples of buildings made of CLT panels. A road map for codes and standards implementation of CLT in North America is included, along with an indication of some of the obstacles that can be expected.",
"title": ""
},
{
"docid": "f4e8b75ce3149566edec9eb1f248c226",
"text": "Knowledge Management software is software that integrates. Existing Data sources, process flows, application features from office appliances have to be brought together. There are different standards, consisting of data formats and communication protocols, that address this issue. The WWW and Semantic Web are designed to work on a worldwide scale and define those standards. We transfer the web standards to the desktop szenario, a vision we call Semantic Desktop – a Semantic Web enhanced desktop environment. Central is the idea of taking know-how from the Semantic Web to tackle personal information management. Existing desktop applications (email client, browser, office applications) are integrated, the semantic glue between them expressed using ontologies. We also present the www.gnowsis.org open source project by the DFKI that realizes parts of this vision. It is based on a Semantic Web Server running as desktop service. It was used in experiments and research projects and allows others to experiment. Knowledge management applications can be built on top of it, reducing the implementation cost. 1",
"title": ""
},
{
"docid": "387827eae5fb528506c83d5fb161cd63",
"text": "Distinction work task power-matching control strategy was adapted to excavator for improving fuel efficiency; the accuracy of rotate engine speed at each work task was core to excavator for saving energy. 21t model excavator ZG3210-9 was taken as the study object to analyze the rotate speed setting and control method, linear position feedback throttle motor was employed to control the governor of engine to adjust rotate speed. Improved double closed loop PID method was adapted to control the engine, feedback of rotate speed and throttle position was taken as the input of the PID control mode. Control system was designed in CoDeSys platform with G16 controller, throttle motor control experiment and engine auto control experiment were carried on the excavator for tuning PID parameters. The result indicated that the double closed-loop PID method can take control and set the engine rotate speed automatically with the maximum error of 8 rpm. The linear model between throttle feedback position and rotate speed is established, which provides the control basis for dynamic energy saving of excavator.",
"title": ""
},
{
"docid": "b3feaaf615ec03030a525825de697cce",
"text": "Reaching and grasping in primates depend on the coordination of neural activity in large frontoparietal ensembles. Here we demonstrate that primates can learn to reach and grasp virtual objects by controlling a robot arm through a closed-loop brain-machine interface (BMIc) that uses multiple mathematical models to extract several motor parameters (i.e., hand position, velocity, gripping force, and the EMGs of multiple arm muscles) from the electrical activity of frontoparietal neuronal ensembles. As single neurons typically contribute to the encoding of several motor parameters, we observed that high BMIc accuracy required recording from large neuronal ensembles. Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations.",
"title": ""
},
{
"docid": "36b4097c3c394352dc2b7ac25ff4948f",
"text": "An important task of opinion mining is to extract people’s opinions on features of an entity. For example, the sentence, “I love the GPS function of Motorola Droid” expresses a positive opinion on the “GPS function” of the Motorola phone. “GPS function” is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and “no” patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.",
"title": ""
},
{
"docid": "7b5d9b3c795f93d165aa0c535fb1c338",
"text": "In this paper, we address the problem of assessing the overall quality of forgery detection approaches for artificial sweat printed latent fingerprints placed at crime scenes. It is very important to have reliable detection mechanisms tested on manifold characteristics caused for example by different surfaces, printers and during acquisition, avoiding misleading crime scene investigations. Today only a limited number of detection methods exist in the literature and test sets are still limited in size and quality covering all different conditions (influence factors). Based on the recently introduced publicly available StirTrace tool, we enhance the functionality to simulate complex and realistic test sets and discuss how detection approaches can be tuned by further preprocessing and feature selection. Our contributions here are twofold. First, we suggest a benchmarking design in 16-bit domain working in full bit-depth of today's nanometer sensory and propose enhancements for further simulations of sensor and substrate characteristics as well as single and combined scan artifacts (simulated, novel experimental data set of in sum 1.254.000 samples). Second, we benchmark exemplarily two known feature sets on nonsimulated and simulated data and compare findings with additional preprocessing and feature selection. Finally, we summarize lessons learned how good today's detection works and which challenges exist for achieving a high reliability. For the community we provide a tool, which can be used as fundamental basis to simulate influence factors allowing a systematic comparison and benchmarking of results. We also want to motivate further research in the design and tuning of forgery detection approaches.",
"title": ""
},
{
"docid": "a2514f994292481d0fe6b37afe619cb5",
"text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/ watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received a tremendous attention recently, its application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used unusual method to send message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on slave’s head prior to sending him off to his son-inlaw. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was written on wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, recovered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. 
The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. The earliest actual book on steganography was a four hundred page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. Further development in the field occurred in 1883, with the publication of Auguste Kerchoffs’ Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.",
"title": ""
},
{
"docid": "21b9b7995cabde4656c73e9e278b2bf5",
"text": "Topic modeling techniques have been recently applied to analyze and model source code. Such techniques exploit the textual content of source code to provide automated support for several basic software engineering activities. Despite these advances, applications of topic modeling in software engineering are frequently suboptimal. This can be attributed to the fact that current state-of-the-art topic modeling techniques tend to be data intensive. However, the textual content of source code, embedded in its identifiers, comments, and string literals, tends to be sparse in nature. This prevents classical topic modeling techniques, typically used to model natural language texts, to generate proper models when applied to source code. Furthermore, the operational complexity and multi-parameter calibration often associated with conventional topic modeling techniques raise important concerns about their feasibility as data analysis models in software engineering. Motivated by these observations, in this paper we propose a novel approach for topic modeling designed for source code. The proposed approach exploits the basic assumptions of the cluster hypothesis and information theory to discover semantically coherent topics in software systems. Ten software systems from different application domains are used to empirically calibrate and configure the proposed approach. The usefulness of generated topics is empirically validated using human judgment. Furthermore, a case study that demonstrates thet operation of the proposed approach in analyzing code evolution is reported. The results show that our approach produces stable, more interpretable, and more expressive topics than classical topic modeling techniques without the necessity for extensive parameter calibration.",
"title": ""
},
{
"docid": "9b1cf040b59dd25528b58d281e796ad9",
"text": "The rapid development of Web2.0 leads to significant information redundancy. Especially for a complex news event, it is difficult to understand its general idea within a single coherent picture. A complex event often contains branches, intertwining narratives and side news which are all called storylines. In this paper, we propose a novel solution to tackle the challenging problem of storylines extraction and reconstruction. Specifically, we first investigate two requisite properties of an ideal storyline. Then a unified algorithm is devised to extract all effective storylines by optimizing these properties at the same time. Finally, we reconstruct all extracted lines and generate the high-quality story map. Experiments on real-world datasets show that our method is quite efficient and highly competitive, which can bring about quicker, clearer and deeper comprehension to readers.",
"title": ""
},
{
"docid": "d8ae4e4f26adf30aac4898e120da17f6",
"text": "Many organizations in African countries need to reengineer their business processes to improve on efficiency. The general objective of this study was to identify the impact of different factors, including organizational resistance to change on Business Process Reengineering (BPR). The study showed that only 30.4% of BPR projects in Uganda have delivered the intended usable Information Systems. The researchers have identified the factors impacting on BPR and possible causes of BPR failures. The identified emotional response of the users towards the BPR implementation ranges from Acceptance to Testing, Indifference and Anger. Based upon the study findings, the researchers have formulated the set of recommendations for organizations implementing BPR. This paper will be of interest to the organizational managers, BPR implementers and the future researchers in a related area of study.",
"title": ""
},
{
"docid": "c21280fa617bcf55991702211f1fde8b",
"text": "How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. A major motivation for the present work is the unknown reachability of various entanglement classes in quantum experiments. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments-a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.",
"title": ""
}
] |
scidocsrr
|
ae9f33faa61afd98f85846faff9922cb
|
Bundled Super-Coiled Polymer Artificial Muscles: Design, Characterization, and Modeling
|
[
{
"docid": "b142873eed364bd471fbe231cd19c27d",
"text": "Robotics have long sought an actuation technology comparable to or as capable as biological muscle tissue. Natural muscles exhibit a high power-to-weight ratio, inherent compliance and damping, fast action, and a high dynamic range. They also produce joint displacements and forces without the need for gearing or additional hardware. Recently, supercoiled commercially available polymer threads (sewing thread or nylon fishing lines) have been used to create significant mechanical power in a muscle-like form factor. Heating and cooling the polymer threads causes contraction and expansion, which can be utilized for actuation. In this paper, we describe the working principle of supercoiled polymer (SCP) actuation and explore the controllability and properties of these threads. We show that under appropriate environmental conditions, the threads are suitable as a building block for a controllable artificial muscle. We leverage off-the-shelf silver-coated threads to enable rapid electrical heating while the low thermal mass allows for rapid cooling. We utilize both thermal and thermomechanical models for feed-forward and feedback control. The resulting SCP actuator regulates to desired force levels in as little as 28 ms. Together with its inherent stiffness and damping, this is sufficient for a position controller to execute large step movements in under 100 ms. This controllability, high performance, the mechanical properties, and the extremely low material cost are indicative of a viable artificial muscle.",
"title": ""
}
] |
[
{
"docid": "0d9d1a52a789dc5d09c7de24286465bb",
"text": "Text to Speech Synthesis along with the Speech Recognition is widely used throughout the world to enhance the accessibility of the information and enable even the disabled persons to interact with the computers in order to get the potential benefit from this high-tech revolution. In this paper we introduce a bi-lingual novel algorithm for the synthesis of Urdu and Sindhi language text. The devised bi-lingual algorithm uses knowledge based approach along with the hybrid rule based and concatenative acoustic methods to provide efficient and accurate conversion of Urdu and Sindhi text into the high quality speech. The algorithm has been implemented in the VB programming language with a GUI based interface. The proposed system works with high accuracy and has a great potential to be used for variety of applications. The system is versatile enough and can be used for speech recognition also.",
"title": ""
},
{
"docid": "51937670b41f67860eeb5198cf81eb81",
"text": "Content based image retrieval has been around for some time. There are lots of different test data sets, lots of published methods and techniques, and manifold retrieval challenges, where content based image retrieval is of interest. LIRE is a Java library, that provides a simple way to index and retrieve millions of images based on the images' contents. LIRE is robust and well tested and is not only recommended by the websites of ImageCLEF and MediaEval, but is also employed in industry. This paper gives an overview on LIRE, its use, capabilities and reports on retrieval and runtime performance.",
"title": ""
},
{
"docid": "abcad2d522600ffc1c2fb81617296a5d",
"text": "Text miningconcerns applying data mining techniques to unstructured text.Information extraction(IE) is a form of shallow text understanding that locates specific pieces of data in natural language documents, transforming unstructured text into a structured database. This paper describes a system called DISCOTEX, that combines IE and data mining methodologies to perform text mining as well as improve the performance of the underlying extraction system. Rules mined from a database extracted from a corpus of texts are used to predict additional information to extract from future documents, thereby improving the recall of IE. Encouraging results are presented on applying these techniques to a corpus of computer job announcement postings from an Internet newsgroup.",
"title": ""
},
{
"docid": "768749e22e03aecb29385e39353dd445",
"text": "Query logs are of great interest for scientists and companies for research, statistical and commercial purposes. However, the availability of query logs for secondary uses raises privacy issues since they allow the identification and/or revelation of sensitive information about individual users. Hence, query anonymization is crucial to avoid identity disclosure. To enable the publication of privacy-preserved -but still usefulquery logs, in this paper, we present an anonymization method based on semantic microaggregation. Our proposal aims at minimizing the disclosure risk of anonymized query logs while retaining their semantics as much as possible. First, a method to map queries to their formal semantics extracted from the structured categories of the Open Directory Project is presented. Then, a microaggregation method is adapted to perform a semantically-grounded anonymization of query logs. To do so, appropriate semantic similarity and semantic aggregation functions are proposed. Experiments performed using real AOL query logs show that our proposal better retains the utility of anonymized query logs than other related works, while also minimizing the disclosure risk.",
"title": ""
},
{
"docid": "e3d8c147c679860240933834574b1c48",
"text": "In traditional software engineering project management, managers provide focused guidance to a team responsible for producing a specific result in a specified amount of time. Today, however, organizations are increasingly taking a product line approach to software to exploit product commonalities. Software product line organizations have unique practices and project definitions. These unconventional features offer new challenges and directions for traditional project management. How does the traditional concept of a project - a temporary endeavor aimed at creating a unique product or service - hold up under this new paradigm? In this article, we discuss this question, along with how the idea of a \"project\" and project management techniques must expand to fit a product line context. In particular, we show how the \"overall guidelines, policies, and procedures\" that Thayer and Pyster spoke of some years ago remain crucially important in product line organizations today.",
"title": ""
},
{
"docid": "36e2a20efc0f11589de197975c1195cc",
"text": "The conventional sigma-delta (SigmaDelta) modulator structures used in telecommunication and audio applications usually cannot satisfy the requirements of signal processing applications for converting the wideband signals into digital samples accurately. In this paper, system design, analytical aspects and optimization methods of a third order incremental sigma-delta (SigmaDelta) modulator will be discussed and finally the designed modulator will be implemented by switched-capacitor circuits. The design of anti-aliasing filter has been integrated inside of modulator signal transfer function. It has been shown that the implemented 3rd order sigma-delta (SigmaDelta) modulator can be designed for the maximum SNR of 54 dB for minimum over- sampling ratio of 16. The modulator operating principles and its analysis in frequency domain and the topologies for its optimizing have been discussed elaborately. Simulation results on implemented modulator validate the system design and its main parameters such as stability and output dynamic range.",
"title": ""
},
{
"docid": "6f0b8b18689afb9b4ac7466b7898a8e8",
"text": "BACKGROUND\nApproximately 60 million people in the United States live with one of four chronic conditions: heart disease, diabetes, chronic respiratory disease, and major depression. Anxiety and depression are very common comorbidities in COPD and have significant impact on patients, their families, society, and the course of the disease.\n\n\nMETHODS\nWe report the proceedings of a multidisciplinary workshop on anxiety and depression in COPD that aimed to shed light on the current understanding of these comorbidities, and outline unanswered questions and areas of future research needs.\n\n\nRESULTS\nEstimates of prevalence of anxiety and depression in COPD vary widely but are generally higher than those reported in some other advanced chronic diseases. Untreated and undetected anxiety and depressive symptoms may increase physical disability, morbidity, and health-care utilization. Several patient, physician, and system barriers contribute to the underdiagnosis of these disorders in patients with COPD. While few published studies demonstrate that these disorders associated with COPD respond well to appropriate pharmacologic and nonpharmacologic therapy, only a small proportion of COPD patients with these disorders receive effective treatment.\n\n\nCONCLUSION\nFuture research is needed to address the impact, early detection, and management of anxiety and depression in COPD.",
"title": ""
},
{
"docid": "0e7da1ef24306eea2e8f1193301458fe",
"text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "a0d61bdc432aee1fa1482e502928c488",
"text": "An industrial control network is a system of interconnected equipment used to monitor and control physical equipment in industrial environments. These networks differ quite significantly from traditional enterprise networks due to the specific requirements of their operation. Despite the functional differences between industrial and enterprise networks, a growing integration between the two has been observed. The technology in use in industrial networks is also beginning to display a greater reliance on Ethernet and web standards, especially at higher levels of the network architecture. This has resulted in a situation where engineers involved in the design and maintenance of control networks must be familiar with both traditional enterprise concerns, such as network security, as well as traditional industrial concerns such as determinism and response time. This paper highlights some of the differences between enterprise and industrial networks, presents a brief history of industrial networking, gives a high level explanation of some operations specific to industrial networks, provides an overview of the popular protocols in use and describes current research topics. The purpose of this paper is to serve as an introduction to industrial control networks, aimed specifically at those who have had minimal exposure to the field, but have some familiarity with conventional computer networks.",
"title": ""
},
{
"docid": "4e23abcd1746d23c54e36c51e0a59208",
"text": "Recognizing actions is one of the important challenges in computer vision with respect to video data, with applications to surveillance, diagnostics of mental disorders, and video retrieval. Compared to other data modalities such as documents and images, processing video data demands orders of magnitude higher computational and storage resources. One way to alleviate this difficulty is to focus the computations to informative (salient) regions of the video. In this paper, we propose a novel global spatio-temporal selfsimilarity measure to score saliency using the ideas of dictionary learning and sparse coding. In contrast to existing methods that use local spatio-temporal feature detectors along with descriptors (such as HOG, HOG3D, HOF, etc.), dictionary learning helps consider the saliency in a global setting (on the entire video) in a computationally efficient way. We consider only a small percentage of the most salient (least self-similar) regions found using our algorithm, over which spatio-temporal descriptors such as HOG and region covariance descriptors are computed. The ensemble of such block descriptors in a bag-of-features framework provides a holistic description of the motion sequence which can be used in a classification setting. Experiments on several benchmark datasets in video based action classification demonstrate that our approach performs competitively to the state of the art.",
"title": ""
},
{
"docid": "6f80404dd9b280f72cacfa8b03c13357",
"text": "The step of urbanization and modern civilization fosters different functional zones in a city, such as residential areas, business districts, and educational areas. In a metropolis, people commute between these functional zones every day to engage in different socioeconomic activities, e.g., working, shopping, and entertaining. In this paper, we propose a data-driven framework to discover functional zones in a city. Specifically, we introduce the concept of latent activity trajectory (LAT), which captures socioeconomic activities conducted by citizens at different locations in a chronological order. Later, we segment an urban area into disjointed regions according to major roads, such as highways and urban expressways. We have developed a topic-modeling-based approach to cluster the segmented regions into functional zones leveraging mobility and location semantics mined from LAT. Furthermore, we identify the intensity of each functional zone using Kernel Density Estimation. Extensive experiments are conducted with several urban scale datasets to show that the proposed framework offers a powerful ability to capture city dynamics and provides valuable calibrations to urban planners in terms of functional zones.",
"title": ""
},
{
"docid": "d473154967f8fc522bd0d2a95f29bdc3",
"text": "This paper presents a model for Virtual Network Function (VNF) placement and chaining across Cloud environments. We propose a new analytical approach for joint VNFs placement and traffic steering for complex service chains and different VNF types. A custom greedy algorithm is also proposed to compare with our solution. Performance evaluation results show that our approach is fast and stable and has a execution time that essentially depends only on the NFV infrastructure size.",
"title": ""
},
{
"docid": "df3d9037bff693c574a03875e7f4f0ea",
"text": "We study the problem of imitation learning from demonstrations of multiple coordinating agents. One key challenge in this setting is that learning a good model of coordination can be difficult, since coordination is often implicit in the demonstrations and must be inferred as a latent variable. We propose a joint approach that simultaneously learns a latent coordination model along with the individual policies. In particular, our method integrates unsupervised structure learning with conventional imitation learning. We illustrate the power of our approach on a difficult problem of learning multiple policies for finegrained behavior modeling in team sports, where different players occupy different roles in the coordinated team strategy. We show that having a coordination model to infer the roles of players yields substantially improved imitation loss compared to conventional baselines.",
"title": ""
},
{
"docid": "d7527aeeb5f26f23930b8d674beb0a13",
"text": "A three-part investigation was conducted to explore the meaning of color preferences. Phase 1 used a Q-sort technique to assess intra-individual stability of preferences over 5 wk. Phase 2 used principal components analysis to discern the manner in which preferences were being made. Phase 3 used canonical correlation to evaluate a hypothesized relationship between color preferences and personality, with five scales of the Personality Research Form serving as the criterion measure. Munsell standard papers, a standard light source, and a color vision test were among control devices applied. There were marked differences in stability of color preferences. Sex differences in intra-individual stability were also apparent among the 90 subjects. An interaction of hue and lightness appeared to underlie such judgments when saturation was kept constant. An unexpected breakdown in control pointed toward the possibly powerful effect of surface finish upon color preference. No relationship to five manifest needs were found. It was concluded that the beginning steps had been undertaken toward psychometric development of a reliable technique for the measurement of color preference.",
"title": ""
},
{
"docid": "157f5ef02675b789df0f893311a5db72",
"text": "We present a novel spectral shading model for human skin. Our model accounts for both subsurface and surface scattering, and uses only four parameters to simulate the interaction of light with human skin. The four parameters control the amount of oil, melanin and hemoglobin in the skin, which makes it possible to match specific skin types. Using these parameters we generate custom wavelength dependent diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. These diffusion profiles are computed using convolved diffusion multipoles, enabling an accurate and rapid simulation of the subsurface scattering of light within skin. We combine the subsurface scattering simulation with a Torrance-Sparrow BRDF model to simulate the interaction of light with an oily layer at the surface of the skin. Our results demonstrate that this four parameter model makes it possible to simulate the range of natural appearance of human skin including African, Asian, and Caucasian skin types.",
"title": ""
},
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
},
{
"docid": "56321ec6dfc3d4c55fc99125e942cf44",
"text": "The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed – such as cross-validation or percentage splits without proper instance definition – prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most reallife settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.",
"title": ""
},
{
"docid": "1b1953e3dd28c67e7a8648392422df88",
"text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.",
"title": ""
},
{
"docid": "eb71ba791776ddfe0c1ddb3dc66f6e06",
"text": "An enterprise resource planning (ERP) is an enterprise-wide application software package that integrates all necessary business functions into a single system with a common database. In order to implement an ERP project successfully in an organization, it is necessary to select a suitable ERP system. This paper presents a new model, which is based on linguistic information processing, for dealing with such a problem. In the study, a similarity degree based algorithm is proposed to aggregate the objective information about ERP systems from some external professional organizations, which may be expressed by different linguistic term sets. The consistency and inconsistency indices are defined by considering the subject information obtained from internal interviews with ERP vendors, and then a linear programming model is established for selecting the most suitable ERP system. Finally, a numerical example is given to demonstrate the application of the",
"title": ""
}
] |
scidocsrr
|
8fc90750c972345932fd8603895db10e
|
LieNet: Real-time Monocular Object Instance 6D Pose Estimation
|
[
{
"docid": "ea236e7ab1b3431523c01c51a3186009",
"text": "Analysis-by-synthesis has been a successful approach for many tasks in computer vision, such as 6D pose estimation of an object in an RGB-D image which is the topic of this work. The idea is to compare the observation with the output of a forward process, such as a rendered image of the object of interest in a particular pose. Due to occlusion or complicated sensor noise, it can be difficult to perform this comparison in a meaningful way. We propose an approach that \"learns to compare\", while taking these difficulties into account. This is done by describing the posterior density of a particular object pose with a convolutional neural network (CNN) that compares observed and rendered images. The network is trained with the maximum likelihood paradigm. We observe empirically that the CNN does not specialize to the geometry or appearance of specific objects. It can be used with objects of vastly different shapes and appearances, and in different backgrounds. Compared to state-of-the-art, we demonstrate a significant improvement on two different datasets which include a total of eleven objects, cluttered background, and heavy occlusion.",
"title": ""
},
{
"docid": "e19b6cd095129b42be0bf0fe3f3d4a96",
"text": "This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state-of-the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.",
"title": ""
}
] |
[
{
"docid": "7a1083d9d292ba3f240c17df0d149a52",
"text": "0377-2217/$ see front matter 2012 Elsevier B.V. A doi:10.1016/j.ejor.2012.01.019 ⇑ Corresponding author. Tel.: +31 50 363 8617; fax E-mail addresses: w.romeijnders@rug.nl (W. Rom (R. Teunter), vanjaarsveld@ese.eur.nl (W. van Jaarsvel 1 Tel.: +31 50 363 7020; fax: +31 53 489 2032. 2 Tel.: +31 10 408 1472; fax: +31 10 408 9640. Forecasting spare parts demand is notoriously difficult, as demand is typically intermittent and lumpy. Specialized methods such as that by Croston are available, but these are not based on the repair operations that cause the intermittency and lumpiness of demand. In this paper, we do propose a method that, in addition to the demand for spare parts, considers the type of component repaired. This two-step forecasting method separately updates the average number of parts needed per repair and the number of repairs for each type of component. The method is tested in an empirical, comparative study for a service provider in the aviation industry. Our results show that the two-step method is one of the most accurate methods, and that it performs considerably better than Croston’s method. Moreover, contrary to other methods, the two-step method can use information on planned maintenance and repair operations to reduce forecasts errors by up to 20%. We derive further analytical and simulation results that help explain the empirical findings. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "45252c6ffe946bf0f9f1984f60ffada6",
"text": "Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation model that represents the Gibbs distribution using the arg max of randomly perturbed encoder. We subsequently apply the direct loss minimization technique to propagate gradients through the reparameterized arg max. The resulting gradient is estimated by the difference of the encoder gradients that are evaluated in two arg max predictions.",
"title": ""
},
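The passage above builds on the Gumbel-Max perturbation model. As a point of reference, here is a minimal Python sketch of the Gumbel-Max trick itself, sampling from a categorical distribution by perturbing logits with Gumbel noise and taking the arg max; the direct-loss-minimization gradient estimator described in the abstract is not reproduced here, and the toy logits are purely illustrative.

```python
import numpy as np

def sample_gumbel_max(logits, rng):
    """Categorical sample via the Gumbel-Max trick: argmax_i (logit_i + g_i),
    with g_i ~ Gumbel(0, 1) generated as -log(-log(U)), U ~ Uniform(0, 1)."""
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return int(np.argmax(logits + gumbel_noise))

# Toy check: empirical frequencies should approach softmax(logits).
rng = np.random.default_rng(0)
logits = np.array([1.0, 0.5, -1.0])
draws = [sample_gumbel_max(logits, rng) for _ in range(10000)]
print(np.bincount(draws, minlength=3) / len(draws))
print(np.exp(logits) / np.exp(logits).sum())
```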
{
"docid": "c31dbdee3c36690794f3537c61cfc1e3",
"text": "Shape memory alloy (SMA) actuators, which have ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Although the number of applications is increasing, there has been limited success in precise motion control since the systems are disturbed by unknown factors beside their inherent nonlinear hysteresis or the surrounding environment of the systems is changed. This paper presents a new development of SMA position control system by using self-tuning fuzzy PID controller. The use of this control algorithm is to tune the parameters of the PID controller by integrating fuzzy inference and producing a fuzzy adaptive PID controller that can be used to improve the control performance of nonlinear systems. The experimental results of position control of SMA actuators using conventional and self tuning fuzzy PID controller are both included in this paper",
"title": ""
},
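For orientation, the sketch below shows the basic discrete PID loop that a self-tuning fuzzy layer would adjust online. It is only an assumption-laden toy: the fuzzy rule base, the SMA actuator dynamics, and all gain and sampling values are illustrative stand-ins, not the controller from the abstract.

```python
class PID:
    """Basic discrete PID controller; gains kp/ki/kd and dt are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# A fuzzy self-tuning layer would update kp/ki/kd online from (error, d_error);
# here the gains stay fixed and the plant is a crude first-order stand-in.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
position = 0.0
for _ in range(500):
    control = pid.update(setpoint=1.0, measurement=position)
    position += 0.01 * (control - position)
print(round(position, 3))
```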
{
"docid": "e3283985a648fe66e75c544261882afa",
"text": "We present a simple semi-supervised relation extraction system with large-scale word clustering. We focus on systematically exploring the effectiveness of different cluster-based features. We also propose several statistical methods for selecting clusters at an appropriate level of granularity. When training on different sizes of data, our semi-supervised approach consistently outperformed a state-of-the-art supervised baseline system.",
"title": ""
},
{
"docid": "5f418ea007ac3f0bbd47002a91ea4448",
"text": "The study of taxonomies and hypernymy relations has been extensive on the Natural Language Processing (NLP) literature. However, the evaluation of taxonomy learning approaches has been traditionally troublesome, as it mainly relies on ad-hoc experiments which are hardly reproducible and manually expensive. Partly because of this, current research has been lately focusing on the hypernymy detection task. In this paper we reflect on this trend, analyzing issues related to current evaluation procedures. Finally, we propose two potential avenues for future work so that is-a relations and resources based on them play a more important role in downstream NLP applications.",
"title": ""
},
{
"docid": "ca769d200ccd4a0e122daeb171efa0de",
"text": "A color space defined by the fundamental spectral sensitivity functions of the human visual system is used to assist in the design of computer graphics displays for color-deficient users. The functions are derived in terms of the CIE standard observer color-matching functions. The Farnsworth-Munsell 100-hue test, a widely used color vision test administered using physical color samples, is then implemented on a digitally controlled color television monitor. The flexibility of this computer graphics medium is then used to extend the Farnsworth-Munsell test in a way that improves the specificity of the diagnoses rendered by the test. The issue of how the world appears to color-deficient observers is addressed, and a full-color image is modified to represent a color-defective view of the scene. Specific guidelines are offered for the design of computer graphics displays that will accommodate almost all color-deficient users.<<ETX>>",
"title": ""
},
{
"docid": "cefb8cd5d5e934ccebfeb7691db916bd",
"text": "Visual speech recognition remains a challenging topic due to various speaking characteristics. This paper proposes a new approach for lipreading to recognize isolated speech segments (words, digits, phrases, etc.) using both of 2D image and depth data. The process of the proposed system is divided into three consecutive steps, namely, mouth region tracking and extraction, motion and appearance descriptors (HOG and MBH) computing, and classification using the Support Vector Machine (SVM) method. To evaluate the proposed approach, three public databases (MIRACL-VC, Ouluvs, and CUAVE) were used. Speaker dependent and speaker independent settings were considered in the evaluation experiments. The obtained recognition results demonstrate that lipreading can be performed effectively, and the proposed approach outperforms recent works in the literature for the speaker dependent setting while being competitive for the speaker independent setting.",
"title": ""
},
{
"docid": "a40e71e130f31450ce1e60d9cd4a96be",
"text": "Progering® is the only intravaginal ring intended for contraception therapies during lactation. It is made of silicone and releases progesterone through the vaginal walls. However, some drawbacks have been reported in the use of silicone. Therefore, ethylene vinyl acetate copolymer (EVA) was tested in order to replace it. EVA rings were produced by a hot-melt extrusion procedure. Swelling and degradation assays of these matrices were conducted in different mixtures of ethanol/water. Solubility and partition coefficient of progesterone were measured, together with the initial hormone load and characteristic dimensions. A mathematical model was used to design an EVA ring that releases the hormone at specific rate. An EVA ring releasing progesterone in vitro at about 12.05 ± 8.91 mg day−1 was successfully designed. This rate of release is similar to that observed for Progering®. In addition, it was observed that as the initial hormone load or ring dimension increases, the rate of release also increases. Also, the device lifetime was extended with a rise in the initial amount of hormone load. EVA rings could be designed to release progesterone in vitro at a rate of 12.05 ± 8.91 mg day−1. This ring would be used in contraception therapies during lactation. The use of EVA in this field could have initially several advantages: less initial and residual hormone content in rings, no need for additional steps of curing or crosslinking, less manufacturing time and costs, and the possibility to recycle the used rings.",
"title": ""
},
{
"docid": "5e60c55f419c7d62f4eeb9165e7f107c",
"text": "Background : Agile software development has become a popular way of developing software. Scrum is the most frequently used agile framework, but it is often reported to be adapted in practice. Objective: Thus, we aim to understand how Scrum is adapted in different contexts and what are the reasons for these changes. Method : Using a structured interview guideline, we interviewed ten German companies about their concrete usage of Scrum and analysed the results qualitatively. Results: All companies vary Scrum in some way. The least variations are in the Sprint length, events, team size and requirements engineering. Many users varied the roles, effort estimations and quality assurance. Conclusions: Many variations constitute a substantial deviation from Scrum as initially proposed. For some of these variations, there are good reasons. Sometimes, however, the variations are a result of a previous non-agile, hierarchical organisation.",
"title": ""
},
{
"docid": "b5f356974c0272e04b6e4844b297684e",
"text": "The development and spread of chloroquine-resistant Plasmodium falciparum threatens the health of millions of people and poses a major challenge to the control of malaria. Monitoring drug efficacy in 2-year intervals is an important tool for establishing rational anti-malarial drug policies. This study addresses the therapeutic efficacy of artemether-lumefantrine (AL) for the treatment of Plasmodium falciparum in southwestern Ethiopia. A 28-day in vivo therapeutic efficacy study was conducted from September to December, 2011, in southwestern Ethiopia. Participants were selected for the study if they were older than 6 months, weighed more than 5 kg, symptomatic, and had microscopically confirmed, uncomplicated P. falciparum. All 93 eligible patients were treated with AL and followed for 28 days. For each patient, recurrence of parasitaemia, the clinical condition, and the presence of gametoytes were assessed on each visit during the follow-up period. PCR was conducted to differentiate re-infection from recrudescence. Seventy-four (83.1 %) of the study subjects cleared fever by day 1, but five (5.6 %) had fever at day 2. All study subjects cleared fever by day 3. Seventy-nine (88.8 %) of the study subjects cleared the parasite by day 1, seven (7.9 %) were blood-smear positive by day 1, and three (3.4 %) were positive by day 2. In five patients (5.6 %), parasitaemia reappeared during the 28-day follow-up period. From these five, one (1.1 %) was a late clinical failure, and four (4.5 %) were a late parasitological failure. On the day of recurrent parasitaemia, the level of chloroquine/desethylchloroquine (CQ-DCQ) was above the minimum effective concentration (>100 ng/ml) in one patient. There were 84 (94.4 %) adequate clinical and parasitological responses. The 28-day, PCR-uncorrected (unadjusted by genotyping) cure rate was 84 (94.4 %), whereas the 28-day, PCR-corrected cure rate was 87 (97.8 %). Of the three re-infections, two (2.2 %) were due to P. falciparum and one (1.1 %) was due to P. vivax. From 89 study subjects, 12 (13.5 %) carried P. falciparum gametocytes at day 0, whereas the 28-day gametocyte carriage rate was 2 (2.2 %). Years after the introduction of AL in Ethiopia, the finding of this study is that AL has been highly effective in the treatment of uncomplicated P. falciparum malaria and reducing gametocyte carriage in southwestern Ethiopia.",
"title": ""
},
{
"docid": "de21af25cede39d42c1064e626c621cb",
"text": "This study examined the polyphenol composition and antioxidant properties of methanolic extracts from amaranth, quinoa, buckwheat and wheat, and evaluated how these properties were affected following two types of processing: sprouting and baking. The total phenol content amongst the seed extracts were significantly higher in buckwheat (323.4 mgGAE/100 g) and decreased in the following order: buckwheat > quinoa > wheat > amaranth. Antioxidant capacity, measured by the radical 2,2-diphenyl-1-picylhydrazyl scavenging capacity and the ferric ion reducing antioxidant power assays was also highest for buckwheat seed extract (p < 0.01). Total phenol content and antioxidant activity was generally found to increase with sprouting, and a decrease in levels was observed following breadmaking. Analysis by liquid chromatography coupled with diode array detector revealed the presence of phenolic acids, catechins, flavanol, flavone and flavonol glycosides. Overall, quinoa and buckwheat seeds and sprouts represent potential rich sources of polyphenol compounds for enhancing the nutritive properties of foods such as gluten-free breads. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c9979471d76ea95541efd1950391db15",
"text": "High-data-rate sensors, such as video cameras, are becoming ubiquitous in the Internet of Things. This article describes GigaSight, an Internet-scale repository of crowd-sourced video content, with strong enforcement of privacy preferences and access controls. The GigaSight architecture is a federated system of VM-based cloudlets that perform video analytics at the edge of the Internet, thus reducing the demand for ingress bandwidth into the cloud. Denaturing, which is an owner-specific reduction in fidelity of video content to preserve privacy, is one form of analytics on cloudlets. Content-based indexing for search is another form of cloudlet-based analytics. This article is part of a special issue on smart spaces.",
"title": ""
},
{
"docid": "43f5d21de3421564a7d5ecd6c074ea0a",
"text": "Epithelial-mesenchymal transition (EMT) is an important process in embryonic development, fibrosis, and cancer metastasis. During cancer progression, the activation of EMT permits cancer cells to acquire migratory, invasive, and stem-like properties. A growing body of evidence supports the critical link between EMT and cancer stemness. However, contradictory results have indicated that the inhibition of EMT also promotes cancer stemness, and that mesenchymal-epithelial transition, the reverse process of EMT, is associated with the tumor-initiating ability required for metastatic colonization. The concept of 'intermediate-state EMT' provides a possible explanation for this conflicting evidence. In addition, recent studies have indicated that the appearance of 'hybrid' epithelial-mesenchymal cells is favorable for the establishment of metastasis. In summary, dynamic changes or plasticity between the epithelial and the mesenchymal states rather than a fixed phenotype is more likely to occur in tumors in the clinical setting. Further studies aimed at validating and consolidating the concept of intermediate-state EMT and hybrid tumors are needed for the establishment of a comprehensive profile of cancer metastasis.",
"title": ""
},
{
"docid": "1de10e40580ba019045baaa485f8e729",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
},
{
"docid": "5717c8148c93b18ec0e41580a050bf3a",
"text": "Verifiability is one of the core editing principles in Wikipedia, editors being encouraged to provide citations for the added content. For a Wikipedia article, determining the citation span of a citation, i.e. what content is covered by a citation, is important as it helps decide for which content citations are still missing. We are the first to address the problem of determining the citation span in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by a citation. We propose a sequence classification approach where for a paragraph and a citation, we determine the citation span at a finegrained level. We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics.",
"title": ""
},
{
"docid": "5a011a87ce3f37dc6b944d2686fa2f73",
"text": "Agents are self-contained objects within a software model that are capable of autonomously interacting with the environment and with other agents. Basing a model around agents (building an agent-based model, or ABM) allows the user to build complex models from the bottom up by specifying agent behaviors and the environment within which they operate. This is often a more natural perspective than the system-level perspective required of other modeling paradigms, and it allows greater flexibility to use agents in novel applications. This flexibility makes them ideal as virtual laboratories and testbeds, particularly in the social sciences where direct experimentation may be infeasible or unethical. ABMs have been applied successfully in a broad variety of areas, including heuristic search methods, social science models, combat modeling, and supply chains. This tutorial provides an introduction to tools and resources for prospective modelers, and illustrates ABM flexibility with a basic war-gaming example.",
"title": ""
},
{
"docid": "2a9d399edc3c2dcc153d966760f38d80",
"text": "Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provide theoretical supports, this paper studies two asynchronous parallel implementations of SG: one is over a computer network and the other is on a shared memory system. We establish an ergodic convergence rate O(1/ √ K) for both algorithms and prove that the linear speedup is achievable if the number of workers is bounded by √ K (K is the total number of iterations). Our results generalize and improve existing analysis for convex minimization.",
"title": ""
},
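As a concrete illustration of the shared-memory setting analysed above, here is a toy lock-free asynchronous SGD sketch in Python: several threads update one shared NumPy parameter vector on a synthetic least-squares problem. It only mimics the mechanism (stale reads, unsynchronised writes); Python's GIL and the synthetic data are assumptions of the sketch, not part of the paper's analysis.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=2000)

w = np.zeros(10)  # shared parameter vector, updated without any locking

def worker(shared_w, seed, steps=2000, lr=0.01, batch=8):
    local_rng = np.random.default_rng(seed)
    for _ in range(steps):
        idx = local_rng.integers(0, X.shape[0], size=batch)
        # The read of shared_w may be stale relative to other workers' writes.
        grad = X[idx].T @ (X[idx] @ shared_w - y[idx]) / batch
        shared_w -= lr * grad  # in-place, unsynchronised update

threads = [threading.Thread(target=worker, args=(w, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("distance to true parameters:", np.linalg.norm(w - true_w))
```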
{
"docid": "6e9c02d0e1ee2de4b3077257aae2c4a8",
"text": "This study investigated how various child-internal and child-external factors predict English L2 children’s acquisition outcomes for vocabulary size and accuracy with verb morphology. The children who participated (N=169) were between 4;10 and 7;0 years old (mean = 5;10), had between 3 to 62 months of exposure to English (mean = 20 months), and were from newcomer families to Canada. Results showed that factors such as language aptitude (phonological short term memory and analytic reasoning), age, L1 typology, length of exposure to English, and richness of the child’s English environment were significant predictors of variation in children’s L2 outcomes. However, on balance, childinternal factors explained more of the variance in outcomes than child-external factors. Relevance of these findings for Usage-Based theory of language acquisition is discussed.",
"title": ""
},
{
"docid": "cf0b98dfd188b7612577c975e08b0c92",
"text": "Depression is a major cause of disability world-wide. The present paper reports on the results of our participation to the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, ii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set and 0.52/0.81, respectively for the test set.",
"title": ""
},
{
"docid": "7490e0039b8060ec1a4c27405a20a513",
"text": "Trajectories obtained from GPS-enabled taxis grant us an opportunity to not only extract meaningful statistics, dynamics and behaviors about certain urban road users, but also to monitor adverse and/or malicious events. In this paper we focus on the problem of detecting anomalous routes by comparing against historically “normal” routes. We propose a real-time method, iBOAT, that is able to detect anomalous trajectories “on-the-fly”, as well as identify which parts of the trajectory are responsible for its anomalousness. We evaluate our method on a large dataset of taxi GPS logs and verify that it has excellent accuracy (AUC ≥ 0.99) and overcomes many of the shortcomings of other state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
5675e09ac9818dc95bb3b09fec5efa0a
|
Long short-term memory model for traffic congestion prediction with online open data
|
[
{
"docid": "7071a178d42011a39145066da2d08895",
"text": "This paper discusses the trend modeling for traffic time series. First, we recount two types of definitions for a long-term trend that appeared in previous studies and illustrate their intrinsic differences. We show that, by assuming an implicit temporal connection among the time series observed at different days/locations, the PCA trend brings several advantages to traffic time series analysis. We also describe and define the so-called short-term trend that cannot be characterized by existing definitions. Second, we sequentially review the role that trend modeling plays in four major problems in traffic time series analysis: abnormal data detection, data compression, missing data imputation, and traffic prediction. The relations between these problems are revealed, and the benefit of detrending is explained. For the first three problems, we summarize our findings in the last ten years and try to provide an integrated framework for future study. For traffic prediction problem, we present a new explanation on why prediction accuracy can be improved at data points representing the short-term trends if the traffic information from multiple sensors can be appropriately used. This finding indicates that the trend modeling is not only a technique to specify the temporal pattern but is also related to the spatial relation of traffic time series.",
"title": ""
}
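To make the PCA notion of a long-term trend concrete, the sketch below extracts a low-rank daily profile from a day-by-time-slot traffic matrix and treats the residual as the detrended series. The matrix layout, the single retained component, and the synthetic data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def pca_trend(traffic, n_components=1):
    """traffic: (n_days, n_time_slots) matrix of flow/speed observations.
    Returns (trend, residual), where trend is the rank-n_components
    reconstruction around the mean daily profile."""
    mean_profile = traffic.mean(axis=0, keepdims=True)
    centered = traffic - mean_profile
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    rank_k = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
    trend = mean_profile + rank_k
    return trend, traffic - trend

days, slots = 30, 288  # assumed 5-minute slots over a day
rng = np.random.default_rng(1)
base = 100 + 40 * np.sin(np.linspace(0, 2 * np.pi, slots))
data = base + rng.normal(scale=5, size=(days, slots))
trend, residual = pca_trend(data)
print(trend.shape, residual.std())
```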
] |
[
{
"docid": "cd36a4e57a446e25ae612cdc31f6293e",
"text": "Privacy and security concerns can prevent sharing of data, derailing data mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. The key is to obtain valid results, while providing guarantees on the (non)disclosure of data. We present a method for k-means clustering when different sites contain different attributes for a common set of entities. Each site learns the cluster of each entity, but learns nothing about the attributes at other sites.",
"title": ""
},
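For reference, here is the centralized k-means iteration that the protocol above distributes across sites; the secure-computation steps that hide each site's attributes are deliberately omitted, so this sketch is only the non-private baseline.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster went empty.
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(loc=m, size=(50, 2)) for m in (0.0, 5.0, 10.0)])
labels, centers = kmeans(data, k=3)
print(centers.round(2))
```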
{
"docid": "9d019128d54b83311852627aa8c8f82b",
"text": "Minimal pairs bear great benefits in pronunciation teaching and learning which have long been of fruitful use. However, the full use of these pairs has not yet been made in the setting of Hung Vuong University. This paper sought to examine possible problems facing English non-majored students at Hung Vuong University in recognizing and producing English discrete sounds as well as in what way and to what extent do minimal pairs facilitate the teaching and learning of English discrete sounds. The data were collected both quantitatively and qualitatively from various sources: questionnaires for and interviews with both the teacher and student subjects, tests of students’ sound recognitions, regular real-time observations, audio recordings of students’ sound productions, and spectrogram-based analyses of these recordings. The findings revealed that virtually all of the student subjects face the six pronunciation problems: omitting the word-final consonant, adding the word-final /s/ to English words not ending in /s/, adding the schwa /6/ in the middle of a consonant cluster, mispronouncing strange sounds to Vietnamese people, e.g. /t/ and /d/, failing to differentiate between long and short vowels, and failing to differentiate between voiced and voiceless consonants. Both the student and the teacher subjects also show high appreciation of the pedagogical effectiveness of minimal pairs when employed either as a teaching or learning tool within the extent to which English discrete sounds are concerned.",
"title": ""
},
{
"docid": "b6e1ab2729f1a9d195f85a5b0cfad41c",
"text": "Purpose – The paper aims to present a conceptual model that better defines critical success factors to ERP implementation organized with the technology, organization and environment (TOE) framework. The paper also adds to current literature the critical success factor of trust with the vendor, system and consultant which has largely been ignored in the past. Design/methodology/approach – The paper uses past literature and theoretical and conceptual framework development to illustrate a new conceptual model that incorporates critical success factors that have both been empirically tied to ERP implementation success in the past and new insights into how trust impacts ERP implementation success. Findings – The paper finds a lack of research depicted in how trust impacts ERP implementation success and likewise a lack of a greater conceptual model organized to provide insight into ERP implementation success. Originality/value – The paper proposes a holistic conceptual framework for ERP implementation success and discusses the impact that trust with the vendor, system and consultant has on ERP implementation success.",
"title": ""
},
{
"docid": "bc2a32d116e79d0120da6ce81b97ce09",
"text": "Naeem Akhtar MS Scholar; Department of Management Sciences, COMSATS Institute of Information Technology, Sahiwal, Pakistan Saqib Ali and Muhammad Salman MS Scholar; Department of Management Sciences, Bahauddin Zakariya University Sub Campus Sahiwal, Pakistan Asad-Ur-Rehman MS Scholar; Department of Management Sciences, COMSATS Institute of Information Technology, Sahiwal, Pakistan Aqsa Ijaz BBA (Hons), Department of Management Sciences, University of Education Lahore (Okara Campus), Pakistan",
"title": ""
},
{
"docid": "d50b6e7c130080eba98bf4437c333f16",
"text": "In this paper we provide a brief review of how out-of-sample methods can be used to construct tests that evaluate a time-series model's ability to predict. We focus on the role that parameter estimation plays in constructing asymptotically valid tests of predictive ability. We illustrate why forecasts and forecast errors that depend upon estimated parameters may have statistical properties that differ from those of their population counterparts. We explain how to conduct asymptotic inference, taking due account of dependence on estimated parameters.",
"title": ""
},
{
"docid": "d7c0b0261547590d405e118301651b1f",
"text": "This paper reports on the Event StoryLine Corpus (ESC) v0.9, a new benchmark dataset for the temporal and causal relation detection. By developing this dataset, we also introduce a new task, the StoryLine Extraction from news data, which aims at extracting and classifying events relevant for stories, from across news documents spread in time and clustered around a single seminal event or topic. In addition to describing the dataset, we also report on three baselines systems whose results show the complexity of the task and suggest directions for the development of more robust systems.",
"title": ""
},
{
"docid": "512cbe93bf292c5e4836d50b8aaac6b7",
"text": "This paper describes a new approach to the problem of generating the class of all geodetic graphs homeomorphic to a given geodetic one. An algorithmic procedure is elaborated to carry out a systematic finding of such a class of graphs. As a result, the enumeration of the class of geodetic graphs homeomorphic to certain Moore graphs has been performed.",
"title": ""
},
{
"docid": "24e2c8f8b3de74653532e297ce56cdf2",
"text": "We describe a method of incorporating taskspecific cost functions into standard conditional log-likelihood (CLL) training of linear structured prediction models. Recently introduced in the speech recognition community, we describe the method generally for structured models, highlight connections to CLL and max-margin learning for structured prediction (Taskar et al., 2003), and show that the method optimizes a bound on risk. The approach is simple, efficient, and easy to implement, requiring very little change to an existing CLL implementation. We present experimental results comparing with several commonly-used methods for training structured predictors for named-entity recognition.",
"title": ""
},
{
"docid": "a7683aa1cdb5cec5c00de191463acd8b",
"text": "A novel PN diode decoding method for 3D NAND Flash is proposed. The PN diodes are fabricated self-aligned at the source side of the Vertical Gate (VG) 3D NAND architecture. Contrary to the previous 3D NAND approaches, there is no need to fabricate plural string select (SSL) transistors inside the array, thus enabling a highly symmetrical and scalable cell structure. A novel three-step programming pulse waveform is integrated to implement the program-inhibit method, capitalizing on that the PN diodes can prevent leakage of the self-boosted channel potential. A large program-disturb-free window >5V is demonstrated.",
"title": ""
},
{
"docid": "48842e5bf95700acf2b1bb18771aeb00",
"text": "We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61. We use this algorithm to find better approximation algorithms for the capacitated facility location problem with soft capacities and for a common generalization of the k-median and facility location problems. We also prove a lower bound of 1+2/e on the approximability of the k-median problem. At the end, we present a discussion about the techniques we have used in the analysis of our algorithm, including a computer-aided method for proving bounds on the approximation factor.",
"title": ""
},
{
"docid": "a44e95fe672a4468b42fe881cd1697fd",
"text": "In this paper, we present a maximum power point tracker and estimator for a PV system to estimate the point of maximum power, to track this point and force it to reach this point in finite time and to stay there for all future time in order to provide the maximum power available to the load. The load will be composed of a battery bank. This is obtained by controlling the duty cycle of a DC-DC converter using sliding mode control. The sliding mode controller is given the estimated maximum power point as a reference for it to track that point and force the PV system to operate in this point. This method has the advantage that it will guarantee the maximum output power possible by the array configuration while considering the dynamic parameters temperature and solar irradiance and delivering more power to charge the battery. The procedure of designing, simulating and results are presented in this paper.",
"title": ""
},
{
"docid": "51545a0f8c2e8bfc3306aa0cdb6a446a",
"text": "Automatic online and offline signature recognition and verification is becoming ubiquitous in person identification and authentication problems, in various domains requiring different levels of security. There has recently been an increasing interest in developing such systems, with several views on which are the best discriminator features. This paper presents a new offline signature verification system, which considers a new combination of previously used features and introduces two new distance-based ones. A new feature grouping is presented. We have experimented with two classification methods and two feature selection techniques. The best performance so far was obtained with the Naïve Bayes classifier on the reduced feature set (through feature selection).",
"title": ""
},
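A hedged sketch of the classification stage described above: univariate feature selection followed by a Naive Bayes classifier, using scikit-learn. The feature matrix here is synthetic; the paper's actual signature features (including its distance-based ones) and its particular selection techniques are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Stand-in feature matrix: rows = signature samples, columns = global/grid/
# distance-based descriptors; labels 1 = genuine, 0 = forgery (synthetic here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 1.0  # make a few features informative

# Verification pipeline: univariate feature selection, then Naive Bayes.
model = make_pipeline(SelectKBest(f_classif, k=10), GaussianNB())
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```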
{
"docid": "0326178ab59983db61eb5dfe0e2b25a4",
"text": "Article history: Received 9 September 2008 Received in revised form 16 April 2009 Accepted 14 May 2009",
"title": ""
},
{
"docid": "6751464cdb651ca7a801b9cdaddce233",
"text": "Latency- and power-aware offloading is a promising issue in the field of mobile cloud computing today. To provide latency-aware offloading, the concept of cloudlet has evolved. However, offloading an application to the most appropriate cloudlet is still a major challenge. This paper has proposed an application-aware cloudlet selection strategy for multi-cloudlet scenario. Different cloudlets are able to process different types of applications. When a request comes from a mobile device for offloading a task, the application type is verified first. According to the application type, the most suitable cloudlet is selected among multiple cloudlets present near the mobile device. By offloading computation using the proposed strategy, the energy consumption of mobile terminals can be reduced as well as latency in application execution can be decreased. Moreover, the proposed strategy can balance the load of the system by distributing the processes to be offloaded in various cloudlets. Consequently, the probability of putting all loads on a single cloudlet can be dealt for load balancing. The proposed algorithm is implemented in the mobile cloud computing laboratory of our university. In the experimental analyses, the sorting and searching processes, numerical operations, game and web service are considered as the tasks to be offloaded to the cloudlets based on the application type. The delays involved in offloading various applications to the cloudlets located at the university laboratory, using proposed algorithm are presented. The mathematical models of total power consumption and delay for the proposed strategy are also developed in this paper.",
"title": ""
},
{
"docid": "e11bf8903ea7b6e5b7ad384451178c92",
"text": "The increasing availability of online information has triggered an intensive research in the area of automatic text summarization within the Natural Language Processing (NLP). Text summarization reduces the text by removing the less useful information which helps the reader to find the required information quickly. There are many kinds of algorithms that can be used to summarize the text. One of them is TF-IDF (Term Frequency-Inverse Document Frequency). This research aimed to produce an automatic text summarizer implemented with TF-IDF algorithm and to compare it with other various online source of automatic text summarizer. To evaluate the summary produced from each summarizer, The F-Measure as the standard comparison value had been used. The result of this research produces 67% of accuracy with three data samples which are higher compared to the other online summarizers.",
"title": ""
},
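To illustrate the idea behind a TF-IDF summarizer like the one described above, here is a minimal sketch that scores each sentence by the TF-IDF weights of its words and keeps the top-scoring sentences. The naive sentence splitting, tokenization, and weighting details are assumptions of the sketch, not the system evaluated in the abstract.

```python
import math
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Score each sentence by the sum of TF-IDF weights of its words,
    treating each sentence as a 'document', and keep the top-scoring ones."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    tokenized = [re.findall(r'[a-z]+', s.lower()) for s in sentences]
    n = len(sentences)
    doc_freq = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        if not toks:
            scores.append(0.0)
            continue
        tf = Counter(toks)
        scores.append(sum((tf[w] / len(toks)) * math.log(n / doc_freq[w]) for w in tf))
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:n_sentences]
    return ' '.join(sentences[i] for i in sorted(top))

print(summarize("Text summarization shortens a document. It keeps the most "
                "informative sentences. The weather was nice today. TF-IDF "
                "weighs terms that are frequent here but rare elsewhere."))
```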
{
"docid": "3f7207df2fe2ee320dd268311051d511",
"text": "In this article, we study the impact of such eye-hand visibility mismatch on selection tasks performed with hand-rooted pointing techniques. We propose a new mapping for ray control, called Ray Casting from the Eye (RCE), which attempts to overcome this mismatch's negative effects. In essence, RCE combines the benefits of image-plane techniques (the absence of visibility mismatch and continuity of the ray movement in screen space) with the benefits of ray control through hand rotation (requiring less physical hand movement). This article builds on a previous study on the impact of eye-to-hand separation on 3D pointing selection. Here, we provide empirical evidence that RCE clearly outperforms classic ray casting (RC) selection, both in sparse and cluttered scenes.",
"title": ""
},
{
"docid": "23d9479a38afa6e8061fe431047bed4e",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
{
"docid": "eb14c92a5bcbe8bf28bb70b492646a1c",
"text": "While seemingly incompatible, combining large-scale global software development and agile practices is a challenge undertaken by many companies. Case study reports on the successful use of agile practices in small distributed projects already exist. How these practices could be applied to larger projects, however, remains unstudied. This paper reports a case study on agile practices in a 40- person development organization distributed between Norway and Malaysia. Based on seven interviews in the development organization, we describe how Scrum practices were successfully applied, e.g., using teleconference and Web cameras for daily scrum meetings, synchronized 4- week sprints and weekly scrum-of-scrums. Additional agility supporting practices for distributed projects were identified, e.g., frequent visits, unofficial distributed meetings and annual gatherings are described.",
"title": ""
},
{
"docid": "37a8fe29046ec94d54e62f202a961129",
"text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.",
"title": ""
},
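A rough sketch in the spirit of the low-level saliency computation described above: per-pixel distance between the image's mean color and a slightly blurred copy of the image. The color space (plain RGB), the Gaussian sigma, and the normalization are assumptions; the paper's exact formulation is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """image: float array (H, W, 3) with values in [0, 1].
    Saliency = Euclidean distance between the global mean color and a
    Gaussian-smoothed copy of the image, per pixel (RGB here; sigma assumed)."""
    img = image.astype(float)
    blurred = np.stack([gaussian_filter(img[..., c], sigma=2.0) for c in range(3)],
                       axis=-1)
    mean_color = img.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(blurred - mean_color, axis=-1)
    span = sal.max() - sal.min()
    return (sal - sal.min()) / (span + 1e-12)  # normalize to [0, 1]

# Toy usage: a bright square on a dark background should light up.
img = np.zeros((64, 64, 3))
img[24:40, 24:40] = 1.0
sal = saliency_map(img)
print(round(float(sal.max()), 3), round(float(sal[0, 0]), 3))
```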
{
"docid": "d8e8c3ecdb63dcda7fc7e67a02479e07",
"text": "It has been known that salt-sensitivity of blood pressure is defined genetically as well as can be developed secondary to either decreased renal function or by influence of other environmental factors. The aim of the study was to evaluate the possible mechanism for the development of salt-sensitive essential hypertension in the population of Georgia. The Case-Control study included 185 subjects, 94 cases with Essential Hypertension stage I (JNC7) without prior antihypertensive treatment, and 91 controls. Salt-sensitivity test was used to divide both case and control groups into salt-sensitive (n=112) and salt-resistant (n=73) subgroups. Endogenous cardiotonic steroids, sodium and PRA were measured in blood and urine samples at the different sodium conditions. Determinations of circulating levels of endogenous sodium pump inhibitors and PRA were carried out using the ELISA and RIA methods. Descriptive statistics were used to analyze the data. Differences in variables between sodium conditions were assessed using paired t-tests. Salt-sensitivity was found in 60.5% of total population investigated, with higher frequency in females. Salt-sensitivity positively correlated with age in females (r=0.262, p<0.01). Statistically significant positive correlation was found between 24 hour urine sodium concentration changes and salt-sensitivity r=0.334, p<0.01. Significant negative correlation was found between salt-sensitivity and PRA. Since no significant correlations were found between BMI and salt-sensitivity, we assume that BMI and salt-sensitivity should be discussed as different independent risk factors for the development of Essential Hypertension. Significant correlation was found between changes in GFR in salt-sensitive cases and controls p<0.01. This can be explained with comparable hyperfiltration of the kidneys at high sodium load and discussed as early sign of hypertensive nephropathy in salt-sensitive individuals. At the high sodium condition Endogenous MBG and OU were high in salt-sensitive subjects compared to salt-resistant. These compounds decreased after low salt diet in salt-sensitive cases as well as controls but remained within the same level in salt-resistant individuals. MBG and OU levels positively correlated with SBP in salt-sensitive individuals but salt-resistant subjects didn't show any changes. Our results support the idea that chronic high sodium loading (>200 mmol) which is typical in traditional Georgian as well as other diets switch those humoral and pathophysiological mechanisms that can lead to the development of certain type of hypertension in salt-sensitive individuals. Salt intake reduction can prevent development of hypertension in salt-sensitive subjects, although hypertension develops in the salt-resistant individuals but by other mechanism such as RAAS.",
"title": ""
}
] |
scidocsrr
|
7e19fb2a24bb449027ff15361e0b5fef
|
Poster : Low-latency blockchain consensus
|
[
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "45d3e3e34b3a6217c59e5196d09774ef",
"text": "While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin’s open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3 f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than Paypal currently handles, with a confirmation latency of 15-20 seconds.",
"title": ""
}
] |
[
{
"docid": "bd5589d700173efdfb38a8cf9f8bbb3a",
"text": "Interior permanent-magnet (IPM) synchronous motors possess special features for adjustable-speed operation which distinguish them from other classes of ac machines. They are robust high powerdensity machines capable of operating at high motor and inverter efficiencies over wide speed ranges, including considerable ranges of constant-power operation. The magnet cost is minimized by the low magnet weight requirements of the IPM design. The impact of the buried-magnet configuration on the motor's electromagnetic characteristics is discussed. The rotor magnetic circuit saliency preferentially increases the quadrature-axis inductance and introduces a reluctance torque term into the IPM motor's torque equation. The electrical excitation requirements for the IPM synchronous motor are also discussed. The control of the sinusoidal phase currents in magnitude and phase angle with respect to the rotor orientation provides a means for achieving smooth responsive torque control. A basic feedforward algorithm for executing this type of current vector torque control is discussed, including the implications of current regulator saturation at high speeds. The key results are illustrated using a combination of simulation and prototype IPM drive measurements.",
"title": ""
},
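For reference, the torque expression usually written for an IPM machine in the rotor d-q frame is sketched below; it shows the reluctance term introduced by the rotor saliency (L_d different from L_q) that the abstract mentions. Constant and sign conventions vary between references, so treat this as a generic form with p denoting pole pairs.

```latex
% Electromagnetic torque of an IPM synchronous machine (rotor d-q frame),
% p = pole pairs, \lambda_{pm} = magnet flux linkage, L_d < L_q for an IPM rotor:
T_e = \frac{3}{2}\, p \left[ \lambda_{pm}\, i_q + (L_d - L_q)\, i_d\, i_q \right]
% The first term is the magnet (alignment) torque; the second is the
% reluctance torque, which is positive for i_d < 0 since L_d - L_q < 0.
```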
{
"docid": "fb8454aa118fbb198bc942d9b7e1aa55",
"text": "The design and implementation of high gain 2.4GHz patch antenna array for wireless communications application in rural area is presented. The patch antenna array with high gain is expected to minimize the need of tower that almost requires high cost of construction. In order to achieve high gain, the proposed antenna is constructed by 4×4 rectangular patches fed by microstrip line corporate feeding network which is developed using a λ/4 transformer impedance matching technique. The antenna structure is then deployed on a Flame Retardant (FR) 4 Epoxy dielectric substrate which the thickness and dielectric constant of 1.6mm, and 4.4, respectively. Prior hardware realization, some antenna parameters including return loss, voltage standing wave ratio (VSWR), radiation pattern, and gain are characterized through simulation to obtain an optimum design of antenna. While from the measurement, it shows that the characteristics of realized patch antenna array have good agreements with the design results in which the realized antenna has the measured gain of 15.59dB at the center frequency with the return loss of 19.52dB which corresponds to VSWR of 1.24 and the bandwidth response of 130MHz ranges from the frequency of 2.31GHz-2.44GHz.",
"title": ""
},
{
"docid": "ce33bd2f243e2e8d6bd4202720d82ed8",
"text": "BACKGROUND AND OBJECTIVES\nTo assess the prevalence, etiology, diagnosis of primary and secondary lactose intolerance (LI), including age of onset, among children 1-5 years of age. Suspected/perceived lactose intolerance can lead to dietary restrictions which may increase risk of future health issues.\n\n\nMETHODS AND STUDY DESIGN\nMEDLINE, CAB Abstract, and Embase were searched for articles published from January 1995-June 2015 related to lactose intolerance in young children. Authors independently screened titles/abstracts, full text articles, for eligibility against a priori inclusion/exclusion criteria. Two reviewers extracted data and assessed quality of the included studies.\n\n\nRESULTS\nThe search identified 579 articles; 20 studies, the majority of which were crosssectional, were included in the qualitative synthesis. Few studies reported prevalence of primary LI in children aged 1-5 years; those that did reported a range between 0-17.9%. Prevalence of secondary LI was 0-19%. Hydrogen breath test was the most common method used to diagnose LI. None of the included studies reported age of onset of primary LI.\n\n\nCONCLUSIONS\nThere is limited recent evidence on the prevalence of LI in this age group. The low number of studies and wide range of methodologies used to diagnose LI means that comparison and interpretation, particularly of geographical trends, is compromised. Current understanding appears to rely on data generated in the 1960/70s, with varied qualities of evidence. New, high quality studies are necessary to understand the true prevalence of LI. This review is registered with the International Prospective Register for Systematic Reviews (PROSPERO).",
"title": ""
},
{
"docid": "3fb6cec95fcaa0f8b6c6e4f649591b35",
"text": "This paper presents the performance of DSP, image and 3D applications on recent general-purpose microprocessors using streaming SIMD ISA extensions (integer and oating point). The 9 benchmarks benchmark we use for this evaluation have been optimized for DLP and caches use with SIMD extensions and data prefetch. The result of these cumulated optimizations is a speedup that ranges from 1.9 to 7.1. All the benchmarks were originaly computation bound and 7 becomes memory bandwidth bound with the addition of SIMD and data prefetch. Quadrupling the memory bandwidth has no eeect on original kernels but improves the performance of SIMD kernels by 15-55%.",
"title": ""
},
{
"docid": "b962762c107d80fce5e87c3506877999",
"text": "Mobile off-line payment enables purchase over the counter even in the absence of reliable network connections. Popular solutions proposed by leading payment service providers (e.g., Google, Amazon, Samsung, Apple) rely on direct communication between the payer’s device and the POS system, through Near-Field Communication (NFC), Magnetic Secure Transaction (MST), audio and QR code. Although pre-cautions have been taken to protect the payment transactions through these channels, their security implications are less understood, particularly in the presence of unique threats to this new e-commerce service. In the paper, we report a new type of over-the-counter payment frauds on mobile off-line payment, which exploit the designs of existing schemes that apparently fail to consider the adversary capable of actively affecting the payment process. Our attack, called Synchronized Token Lifting and Spending (STLS), demonstrates that an active attacker can sniff the payment token, halt the ongoing transaction through various means and transmit the token quickly to a colluder to spend it in a different transaction while the token is still valid. Our research shows that such STLS attacks pose a realistic threat to popular offline payment schemes, particularly those meant to be backwardly compatible, like Samsung Pay and AliPay. To mitigate the newly discovered threats, we propose a new solution called POSAUTH. One fundamental cause of the STLS risk is the nature of the communication channels used by the vulnerable mobile off-line payment schemes, which are easy to sniff and jam, and more importantly, unable to support a secure mutual challenge-response protocols since information can only be transmitted in one-way. POSAUTH addresses this issue by incorporating one unique ID of the current POS terminal into the generation of payment tokens by requiring a quick scan∗The two lead authors are ordered alphabetically. †Corresponding author. ning of QR code printed on the POS terminal. When combined with a short valid period, POSAUTH can ensure that tokens generated for one transaction can only be used in that transaction.",
"title": ""
},
{
"docid": "d786d4cb7b57885bc0bb2c2bfd892336",
"text": "Problem statement: Clustering is one of the most important research ar eas in the field of data mining. Clustering means creating groups of ob jects based on their features in such a way that th e objects belonging to the same groups are similar an d those belonging to different groups are dissimila r. Clustering is an unsupervised learning technique. T he main advantage of clustering is that interesting patterns and structures can be found directly from very large data sets with little or none of the background knowledge. Clustering algorithms can be applied in many domains. Approach: In this research, the most representative algorithms K-Mean s and K-Medoids were examined and analyzed based on their basic approach. The best algorithm i n each category was found out based on their performance. The input data points are generated by two ways, one by using normal distribution and another by applying uniform distribution. Results: The randomly distributed data points were taken as input to these algorithms and clusters are found ou t for each algorithm. The algorithms were implement ed using JAVA language and the performance was analyze d based on their clustering quality. The execution time for the algorithms in each category was compar ed for different runs. The accuracy of the algorith m was investigated during different execution of the program on the input data points. Conclusion: The average time taken by K-Means algorithm is greater than the time taken by K-Medoids algorithm for both the case of normal and uniform distributions. The r esults proved to be satisfactory.",
"title": ""
},
{
"docid": "3584c223604a80fd8ce5686482cac913",
"text": "In this paper, planar beam scanning substrate integrated waveguide (SIW) slot leaky-wave antennas (LWAs) are proposed for gain enhancement using a metallic phase correction grating cover. Unlike conventional Fabry-Pérot (FP) cavity antennas, the proposed antenna is fed by the SIW beam scanning LWA instead of a feeding antenna. The beam scanning angle range is enlarged by meandering the entire feeding structure and the gain is enhanced by a metallic grating cover acting as a 1-D lens. Two kinds of the gratings with different metal strip parameters are designed and analyzed. The proposed antennas operating at the center frequency of 25.45 GHz are designed and experimentally verified for an automotive collision avoidance radar with a gain enhancement of about 4 ~ 6 dB. The proposed SIW LWAs have the advantages of high gain, low profile, easy fabrication and beam-scanning capability good for millimeter wave radar applications.",
"title": ""
},
{
"docid": "dc3417d01a998ee476aeafc0e9d11c74",
"text": "We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. 1. Per-channel quantization of weights and per-layer quantization of activations to 8-bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures (section 3.1). 2. Model sizes can be reduced by a factor of 4 by quantizing weights to 8bits, even when 8-bit arithmetic is not supported. This can be achieved with simple, post training quantization of weights (section 3.1). 3. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX (section 6). 4. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drop for smaller networks (section 3.2). 5. We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks (Section 3). 6. We review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations (section 4). 7. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits (section 7).",
"title": ""
},
{
"docid": "912662943bd9d74550773594b2ac1299",
"text": "PACON 2005 Harmonization of Port and Industry ABSTRACTS TABLE OF CONTENTSS TABLE OF CONTENTS",
"title": ""
},
{
"docid": "28a4fd94ba02c70d6781ae38bf35ca5a",
"text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "053b069a59b938c183c19e2938f89e66",
"text": "This paper examines the role and value of information security awareness efforts in defending against social engineering attacks. It categories the different social engineering threats and tactics used in targeting employees and the approaches to defend against such attacks. While we review these techniques, we attempt to develop a thorough understanding of human security threats, with a suitable balance between structured improvements to defend human weaknesses, and efficiently focused security training and awareness building. Finally, the paper shows that a multi-layered shield can mitigate various security risks and minimize the damage to systems and data.",
"title": ""
},
{
"docid": "e1485bddbab0c3fa952d045697ff2112",
"text": "The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.",
"title": ""
},
{
"docid": "c1af803b9f2bcbcb2da16599e469fb24",
"text": "BACKGROUND\nWith the introduction of ICD-10 throughout Canada, it is important to ensure that Acute Myocardial Infarction (AMI) comorbidities employed in risk adjustment methods remain valid and robust. Therefore, we developed ICD-10 coding algorithms for nine AMI comorbidities, examined the validity of the ICD-10 and ICD-9 coding algorithms in detection of these comorbidities, and assessed their performance in predicting mortality. The nine comorbidities that we examined were shock, diabetes with complications, congestive heart failure, cancer, cerebrovascular disease, pulmonary edema, acute renal failure, chronic renal failure, and cardiac dysrhythmias.\n\n\nMETHODS\nCoders generated a comprehensive list of ICD-10 codes corresponding to each AMI comorbidity. Physicians independently reviewed and determined the clinical relevance of each item on the list. To ensure that the newly developed ICD-10 coding algorithms were valid in recording comorbidities, medical charts were reviewed. After assessing ICD-10 algorithms' validity, both ICD-10 and ICD-9 algorithms were applied to a Canadian provincial hospital discharge database to predict in-hospital, 30-day, and 1-year mortality.\n\n\nRESULTS\nCompared to chart review data as a 'criterion standard', ICD-9 and ICD-10 data had similar sensitivities (ranging from 7.1-100%), and specificities (above 93.6%) for each of the nine AMI comorbidities studied. The frequencies for the comorbidities were similar between ICD-9 and ICD-10 coding algorithms for 49,861 AMI patients in a Canadian province during 1994-2004. The C-statistics for predicting 30-day and 1 year mortality were the same for ICD-9 (0.82) and for ICD-10 data (0.81).\n\n\nCONCLUSION\nThe ICD-10 coding algorithms developed in this study to define AMI comorbidities performed similarly as past ICD-9 coding algorithms in detecting conditions and risk-adjustment in our sample. However, the ICD-10 coding algorithms should be further validated in external databases.",
"title": ""
},
{
"docid": "7aaa535e1294e9bcce7d0d40caff626e",
"text": "Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data. The state-of-the-art research on the task is transductive inference (e.g. cross-event inference). In this paper, we propose a new method of event extraction by well using cross-entity inference. In contrast to previous inference methods, we regard entitytype consistency as key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we can get 8.6% gain in trigger (event) identification, and more than 11.8% gain for argument (role) classification in ACE event extraction.",
"title": ""
},
{
"docid": "a144b5969c30808f0314218248c48ed6",
"text": "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.",
"title": ""
},
{
"docid": "a1d6a739b10ec93229c33e0a8607e75e",
"text": "We present and discuss the important business problem of estimating the effect of retention efforts on the Lifetime Value of a customer in the Telecommunications industry. We discuss the components of this problem, in particular customer value and length of service (or tenure) modeling, and present a novel segment-based approach, motivated by the segment-level view marketing analysts usually employ. We then describe how we build on this approach to estimate the effects of retention on Lifetime Value. Our solution has been successfully implemented in Amdocs' Business Insight (BI) platform, and we illustrate its usefulness in real-world scenarios.",
"title": ""
},
{
"docid": "019c2d5927e54ae8ce3fc7c5b8cff091",
"text": "In this paper, we present Affivir, a video browsing system that recommends Internet videos that match a user’s affective preference. Affivir models a user’s watching behavior as sessions, and dynamically adjusts session parameters to cater to the user’s current mood. In each session, Affivir discovers a user’s affective preference through user interactions, such as watching or skipping videos. Affivir uses video affective features (motion, shot change rate, sound energy, and audio pitch average) to retrieve videos that have similar affective responses. To efficiently search videos of interest from our video repository, all videos in the repository are pre-processed and clustered. Our experimental results shows that Affivir has made a significant improvement in user satisfaction and enjoyment, compared with several other popular baseline approaches.",
"title": ""
},
{
"docid": "50d397416652309e2c371aaeb53dc1da",
"text": "In conventional energy storage systems using series-connected energy storage cells such as lithium-ion battery cells and supercapacitors (SCs), an interface bidirectional converter and cell voltage equalizer are separately required to manage charging/discharging and ensure years of safe operation. In this paper, a bidirectional PWM converter integrating cell voltage equalizer is proposed. This proposed integrated converter can be derived by combining a traditional bidirectional PWM converter and series-resonant voltage multiplier (SRVM) that functionally operates as an equalizer and is driven by asymmetric square wave voltage generated at the switching node of the converter. The converter and equalizer can be integrated into a single unit without increasing the switch count, achieving not only system-level but also circuit-level simplifications. Open-loop control is feasible for the SRVM when operated in discontinuous conduction mode, meaning the proposed integrated converter can operate similarly to conventional bidirectional converters. An experimental charge-discharge cycling test for six SCs connected in series was performed using the proposed integrated converter. The cell voltage imbalance was gradually eliminated by the SRVM while series-connected SCs were cycled by the bidirectional converter. All the cell voltages were eventually unified, demonstrating the integrated functions of the proposed converter.",
"title": ""
},
{
"docid": "5e9a0d990a3b4fb075552346a11986c4",
"text": "The TinyTeRP is a centimeter-scale, modular wheeled robotic platform developed for the study of swarming or collective behavior. This paper presents the use of TinyTeRPs to implement collective recruitment and rendezvous to a fixed location using several RSSI-based gradient ascent algorithms. We also present a redesign of the wheelbased module with tank treads and a wider base, improving the robot’s mobility over uneven terrain and overall robustness. Lastly, we present improvements to the open source C libraries that allow users to easily implement high-level functions and closed-loop control on the TinyTeRP.",
"title": ""
}
] |
scidocsrr
|
830d0d4e7909a7b8a45dc3952286281e
|
Recent Advances in Cloud Radio Access Networks: System Architectures, Key Techniques, and Open Issues
|
[
{
"docid": "b12947614198d639aef0d3a26b83a215",
"text": "In the era of mobile Internet, mobile operators are facing pressure on ever-increasing capital expenditures and operating expenses with much less growth of income. Cloud Radio Access Network (C-RAN) is expected to be a candidate of next generation access network techniques that can solve operators' puzzle. In this article, on the basis of a general survey of C-RAN, we present a novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane. Compared to traditional architecture, the proposed C-RAN architecture emphasizes the notion of service cloud, service-oriented resource scheduling and management, thus it facilitates the utilization of new communication and computer techniques. With the extensive computation resource offered by the cloud platform, a coordinated user scheduling algorithm and parallel optimum precoding scheme are proposed, which can achieve better performance. The proposed scheme opens another door to design new algorithms matching well with C-RAN architecture, instead of only migrating existing algorithms from traditional architecture to C-RAN.",
"title": ""
}
] |
[
{
"docid": "f051a9937aa9e48524a75c24ec496526",
"text": "A new voltage-programmed pixel circuit using hydrogenated amorphous silicon (a-Si:H) thin-film transistors (TFTs) for active-matrix organic light-emitting diodes (AMOLEDs) is presented. In addition to compensating for the shift in threshold voltage of TFTs, the circuit is capable of compensating for OLED luminance degradation by employing the shift in OLED voltage as a feedback of OLED degradation",
"title": ""
},
{
"docid": "25efced5063ca8c9e842c79a8d3ab073",
"text": "The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of a wrong type of encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for the XSS vulnerability are constructed out of each web page and then evaluated by a unit test execution framework. A grammar-based attack generator is devised to automatically generate test inputs. We also propose a vulnerability repair technique that can automatically fix detected vulnerabilities in many situations. Evaluation of this approach has been conducted on an open source medical record application with over 200 web pages written in JSP.",
"title": ""
},
{
"docid": "9b9a04a859b51866930b3fb4d93653b6",
"text": "BACKGROUND\nResults of several studies have suggested a probable etiologic association between Epstein-Barr virus (EBV) and leukemias; therefore, the aim of this study was to investigate the association of EBV in childhood leukemia.\n\n\nMETHODS\nA direct isothermal amplification method was developed for detection of the latent membrane protein 1 (LMP1) of EBV in the peripheral blood of 80 patients with leukemia (54 had lymphoid leukemia and 26 had myeloid leukemia) and of 20 hematologically healthy control subjects.\n\n\nRESULTS\nEBV LMP1 gene transcripts were found in 29 (36.3%) of the 80 patients with leukemia but in none of the healthy controls (P < .0001). Of the 29 EBV(+) cases, 23 (79.3%), 5 (17.3%), and 1 (3.4%) were acute lymphoblastic leukemia, acute myeloid leukemia, and chronic myeloid leukemia, respectively.\n\n\nCONCLUSION\nEBV LMP1 gene transcriptional activity was observed in a significant proportion of patients with acute lymphoblastic leukemia. EBV infection in patients with lymphoid leukemia may be a factor involved in the high incidence of pediatric leukemia in the Sudan.",
"title": ""
},
{
"docid": "5647833ac7018aa53971791ac59ec5c9",
"text": "Location estimation using the global system for mobile communication (GSM) is an emerging application that infers the location of the mobile receiver from multiple signals measurements. While geometrical and signal propagation models have been deployed to tackle this estimation problem, the terrain factors and power fluctuations have confined the accuracy of such estimation. Using support vector regression, we investigate the missing value location estimation problem by providing theoretical and empirical analysis on existing and novel kernels. A novel synthetic experiment is designed to compare the performances of different location estimation approaches. The proposed support vector regression approach shows promising performances, especially in terrains with local variations in environmental factors",
"title": ""
},
{
"docid": "77059bf4b66792b4f34bc78bbb0b373a",
"text": "Cloud computing systems host most of today's commercial business applications yielding it high revenue which makes it a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly presented as a cloud forensic solution due to the multi tenancy and virtualization of resources prevalent in cloud. While we do cloud forensics, the data to be inspected are cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. In this paper, we have come up with a remote evidence collection and pre-processing framework using Struts and Hadoop distributed file system. Collection of VM disk images, logs etc., are initiated through a pull model when triggered by the investigator, whereas cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross drive analysis.",
"title": ""
},
{
"docid": "ebbb7b73c8c212a310bd0378f2ce39aa",
"text": "\"Nail clipping is a simple technique for diagnosis of several nail unit dermatoses. This article summarizes the practical approach, utility, and histologic findings of a nail clipping in evaluation of onychomycosis, nail unit psoriasis, onychomatricoma, subungual hematoma, melanonychia, and nail cosmetics, and the forensic applications of this easily obtained specimen. It reviews important considerations in optimizing specimen collection, processing methods, and efficacy of special stains in several clinical contexts. Readers will develop a greater understanding and ease of application of this indispensable procedure in assessing nail unit dermatoses.\"",
"title": ""
},
{
"docid": "a1623a10e06537a038ce3eaa1cfbeed7",
"text": "We present a simple zero-knowledge proof of knowledge protocol of which many protocols in the literature are instantiations. These include Schnorr’s protocol for proving knowledge of a discrete logarithm, the Fiat-Shamir and Guillou-Quisquater protocols for proving knowledge of a modular root, protocols for proving knowledge of representations (like Okamoto’s protocol), protocols for proving equality of secret values, a protocol for proving the correctness of a Diffie-Hellman key, protocols for proving the multiplicative relation of three commitments (as required in secure multi-party computation), and protocols used in credential systems. This shows that a single simple treatment (and proof), at a high level of abstraction, can replace the individual previous treatments. Moreover, one can devise new instantiations of the protocol.",
"title": ""
},
{
"docid": "5c3ae59522d549bed4c059a11b9724c6",
"text": "The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19-CCL21-CCR7-ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines.",
"title": ""
},
{
"docid": "f0ff6b076e5ea1ee8ec4d00fa3b92f16",
"text": "Sleep quality has significant effects on cognitive performance and is influenced by multiple factors such as stress. Contrary to the ideal, medical students and residents suffer from sleep deprivation and stress at times when they should achieve the greatest amount of learning. In order to examine the relationship between sleep quality and academic performance, 144 medical students undertaking the pre-clinical board exam answered a survey regarding their subjective sleep quality (Pittsburgh sleep quality index, PSQI), grades and subjective stress for three different time points: semester, pre- and post-exam. Academic performance correlated with stress and sleep quality pre-exam (r = 0.276, p < 0.001 and r = 0.158, p < 0.03, note that low performance meant low sleep quality and high stress), however not with the stress or sleep quality during the semester and post-exam. 59% of all participants exhibited clinically relevant sleep disturbances (PSQI > 5) during exam preparation compared to 29% during the semester and 8% post-exam. This study shows that in medical students it is not the generally poor sleepers, who perform worse in the medical board exams. Instead students who will perform worse on their exams seem to be more stressed and suffer from poor sleep quality. However, poor sleep quality may negatively impact test performance as well, creating a vicious circle. Furthermore, the rate of sleep disturbances in medical students should be cause for intervention.",
"title": ""
},
{
"docid": "6e2933c095c4f077928f5389969abfb8",
"text": "Eventually consistent storage systems give up the ACID semantics of conventional databases in order to gain better scalability, higher availability, and lower latency. A side-effect of this design decision is that application developers must deal with stale or out of order data. As a result, substantial intellectual effort has been devoted to studying the behavior of eventually consistent systems, in particular finding quantitative answers to the questions \"how eventual\" and \"how consistent\"?",
"title": ""
},
{
"docid": "0aff3b047f483216e02644f130fb8151",
"text": "Blockchain methods are emerging as practical tools for validation, record-keeping, and access control in addition to their early applications in cryptocurrency. This column explores the options for use of blockchains to enhance security, trust, and compliance in a variety of industry settings and explores the current state of blockchain standards.",
"title": ""
},
{
"docid": "ac4683be3ffc119f6eb64c4f295ffe2d",
"text": "As data rates in electrical links rise to 56Gb/s, standards are gravitating towards PAM-4 modulation to achieve higher spectral efficiency. Such approaches are not without drawbacks, as PAM-4 signaling results in reduced vertical margins as compared to NRZ. This makes data recovery more susceptible to residual, or uncompensated, intersymbol interference (ISI) when the PAM-4 waveform is sampled by the receiver. To overcome this, existing standards such as OIF CEI 56Gb/s very short reach (VSR) require forward error correction to meet the target link BER of 1E-15. This comes at the expense of higher latency, which is undesirable for chip-to-chip VSR links in compute applications. Therefore, different channel equalization strategies should be considered for PAM-4 electrical links. Employing ½-UI (T/2) tap delays in an FFE extends the filter bandwidth as compared to baud- or T-spaced taps [1], resulting in improved timing margins and lower residual ISI for 56Gb/s PAM-4 data sent across VSR channels. While T/2-spaced FFEs have been reported in optical receivers for dispersion compensation [2], the analog delay techniques used are not conducive to designing dense I/O and cannot support a wide range of data rates. This work demonstrates a 56Gb/s PAM-4 transmitter with a T/2-spaced FFE using high-speed clocking techniques to produce well-controlled tap delays that are data-rate agile. The transmitter also supports T-spaced tap delays, ensuring compatibility with existing standards.",
"title": ""
},
{
"docid": "d9ce90aa11c47e08c10f3a0666521b51",
"text": "Static scheduling of a program represented by a directed task graph on a multiprocessor system to minimize the program completion time is a well-known problem in parallel processing. Since finding an optimal schedule is an NP-complete problem in general, researchers have resorted to devising efficient heuristics. A plethora of heuristics have been proposed based on a wide spectrum of techniques, including branch-and-bound, integer-programming, searching, graph-theory, randomization, genetic algorithms, and evolutionary methods. The objective of this survey is to describe various scheduling algorithms and their functionalities in a contrasting fashion as well as examine their relative merits in terms of performance and time-complexity. Since these algorithms are based on diverse assumptions, they differ in their functionalities, and hence are difficult to describe in a unified context. We propose a taxonomy that classifies these algorithms into different categories. We consider 27 scheduling algorithms, with each algorithm explained through an easy-to-understand description followed by an illustrative example to demonstrate its operation. We also outline some of the novel and promising optimization approaches and current research trends in the area. Finally, we give an overview of the software tools that provide scheduling/mapping functionalities.",
"title": ""
},
{
"docid": "021789cea259697f236986028218e3f6",
"text": "In the IT world of corporate networking, how businesses store and compute data is starting to shift from in-house servers to the cloud. However, some enterprises are still hesitant to make this leap to the cloud because of their information security and data privacy concerns. Enterprises that want to invest into this service need to feel confident that the information stored on the cloud is secure. Due to this need for confidence, trust is one of the major qualities that cloud service providers (CSPs) must build for cloud service users (CSUs). To do this, a model that all CSPs can follow must exist to establish a trust standard in the industry. If no concrete model exists, the future of cloud computing will be stagnant. This paper presents a new trust model that involves all the cloud stakeholders such as CSU, CSP, and third-party auditors. Our proposed trust model is objective since it involves third-party auditors to develop unbiased trust between the CSUs and the CSPs. Furthermore, to support the implementation of the proposed trust model, we rank CSPs according to the trust-values obtained from the trust model. The final score for each participating CSP will be determined based on the third-party assessment and the feedback received from the CSUs.",
"title": ""
},
{
"docid": "c2baa873bc2850b14b3868cdd164019f",
"text": "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.",
"title": ""
},
{
"docid": "efd1e2aa69306bde416065547585813b",
"text": "Numerous approaches based on metrics, token sequence pattern-matching, abstract syntax tree (AST) or program dependency graph (PDG) analysis have already been proposed to highlight similarities in source code: in this paper we present a simple and scalable architecture based on AST fingerprinting. Thanks to a study of several hashing strategies reducing false-positive collisions, we propose a framework that efficiently indexes AST representations in a database, that quickly detects exact (w.r.t source code abstraction) clone clusters and that easily retrieves their corresponding ASTs. Our aim is to allow further processing of neighboring exact matches in order to identify the larger approximate matches, dealing with the common modification patterns seen in the intra-project copy-pastes and in the plagiarism cases.",
"title": ""
},
{
"docid": "28c19bf17c76a6517b5a7834216cd44d",
"text": "The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.",
"title": ""
},
{
"docid": "35f439b86c07f426fd127823a45ffacf",
"text": "The paper concentrates on the fundamental coordination problem that requires a network of agents to achieve a specific but arbitrary formation shape. A new technique based on complex Laplacian is introduced to address the problems of which formation shapes specified by inter-agent relative positions can be formed and how they can be achieved with distributed control ensuring global stability. Concerning the first question, we show that all similar formations subject to only shape constraints are those that lie in the null space of a complex Laplacian satisfying certain rank condition and that a formation shape can be realized almost surely if and only if the graph modeling the inter-agent specification of the formation shape is 2-rooted. Concerning the second question, a distributed and linear control law is developed based on the complex Laplacian specifying the target formation shape, and provable existence conditions of stabilizing gains to assign the eigenvalues of the closed-loop system at desired locations are given. Moreover, we show how the formation shape control law is extended to achieve a rigid formation if a subset of knowledgable agents knowing the desired formation size scales the formation while the rest agents do not need to re-design and change their control laws.",
"title": ""
},
{
"docid": "ebb09cb9641ec9935ea7dae77e8fd636",
"text": "The aim of this study was a detailed clinicopathological investigation of sinonasal NUT midline carcinoma (NMC), including analysis of DNA methylation and microRNA (miRNA) expression. Three (5%) cases of NMC were detected among 56 sinonasal carcinomas using immunohistochemical screening and confirmed by fluorescence in situ hybridization. The series comprised 2 males and 1 female, aged 46, 60, and 65 years. Two tumors arose in the nasal cavity and one in the maxillary sinus. The neoplasms were staged pT1, pT3, and pT4a (all cN0M0). All patients were treated by radical resection with adjuvant radiotherapy. Two patients died 3 and 8 months after operation, but one patient (pT1 stage; R0 resection) experienced no evidence of disease at 108 months. Microscopically, all tumors consisted of infiltrating nests of polygonal cells with vesicular nuclei, prominent nucleoli and basophilic cytoplasm. Abrupt keratinization was present in only one case. Immunohistochemically, there was a diffuse expression of cytokeratin (CK) cocktail, CK7, p40, p63, and SMARCB1/INI1. All NMCs tested negative for EBV and HPV infection. Two NMCs showed methylation of RASSF1 gene. All other genes (APC, ATM, BRCA1, BRCA2, CADM1, CASP8, CD44, CDH13, CDKN1B, CDKN2A, CDKN2B, CHFR, DAPK1, ESR1, FHIT, GSTP1, HIC1, KLLN, MLH1a, MLH1b, RARB, TIMP3, and VHL) were unmethylated. All NMCs showed upregulation of miR-9 and downregulation of miR-99a and miR-145 and two cases featured also upregulation of miR-21, miR-143, and miR-484. In summary, we described three cases of sinonasal NMCs with novel findings on DNA methylation and miRNA expression, which might be important for new therapeutic strategies in the future.",
"title": ""
},
{
"docid": "8f4af2b22c53c2b640a885c4645f08de",
"text": "Pose variation is one key challenge in face recognition. As opposed to current techniques for pose invariant face recognition, which either directly extract pose invariant features for recognition, or first normalize profile face images to frontal pose before feature extraction, we argue that it is more desirable to perform both tasks jointly to allow them to benefit from each other. To this end, we propose a Pose Invariant Model (PIM) for face recognition in the wild, with three distinct novelties. First, PIM is a novel and unified deep architecture, containing a Face Frontalization sub-Net (FFN) and a Discriminative Learning sub-Net (DLN), which are jointly learned from end to end. Second, FFN is a well-designed dual-path Generative Adversarial Network (GAN) which simultaneously perceives global structures and local details, incorporated with an unsupervised cross-domain adversarial training and a \"learning to learn\" strategy for high-fidelity and identity-preserving frontal view synthesis. Third, DLN is a generic Convolutional Neural Network (CNN) for face recognition with our enforced cross-entropy optimization strategy for learning discriminative yet generalized feature representation. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts.",
"title": ""
}
] |
scidocsrr
|
7200ac6f8834e5334e8c87365e448a70
|
Transcranial Doppler Ultrasound: A Review of the Physical Principles and Major Applications in Critical Care
|
[
{
"docid": "9d99851970492cc4e8f6ac54967a5229",
"text": "BACKGROUND AND PURPOSE\nTranscranial Doppler (TCD) is used for diagnosis of vasospasm in patients with subarachnoid hemorrhage due to a ruptured aneurysm. Our aim was to evaluate both the accuracy of TCD compared with angiography and its usefulness as a screening method in this setting.\n\n\nMETHODS\nA search (MEDLINE, EMBASE, Cochrane Library, bibliographies, hand searching, any language, through January 31, 2001) was performed for studies comparing TCD with angiography. Data were critically appraised using a modified published 10-point score and were combined using a random-effects model.\n\n\nRESULTS\nTwenty-six reports compared TCD with angiography. Median validity score was 4.5 (range 1 to 8). Meta-analyses could be performed with data from 7 trials. For the middle cerebral artery (5 trials, 317 tests), sensitivity was 67% (95% CI 48% to 87%), specificity was 99% (98% to 100%), positive predictive value (PPV) was 97% (95% to 98%), and negative predictive value (NPV) was 78% (65% to 91%). For the anterior cerebral artery (3 trials, 171 tests), sensitivity was 42% (11% to 72%), specificity was 76% (53% to 100%), PPV was 56% (27% to 84%), and NPV was 69% (43% to 95%). Three of these 7 studies reported on the same patients, each on another artery, and for 4, data recycling could not be disproved. Other arteries were tested in only 1 trial each.\n\n\nCONCLUSIONS\nFor the middle cerebral artery, TCD is not likely to indicate a spasm when angiography does not show one (high specificity), and TCD may be used to identify patients with a spasm (high PPV). For all other situations and arteries, there is either lack of evidence of accuracy or of any usefulness of TCD. Most of these data are of low methodological quality, bias cannot not be ruled out, and data reporting is often uncritical.",
"title": ""
}
] |
[
{
"docid": "61f434d5c0b693dd779659106ea35cd4",
"text": "Malignant melanoma has one of the most rapidly increasing incidences in the world and has a considerable mortality rate. Early diagnosis is particularly important since melanoma can be cured with prompt excision. Dermoscopy images play an important role in the non-invasive early detection of melanoma [1]. However, melanoma detection using human vision alone can be subjective, inaccurate and poorly reproducible even among experienced dermatologists. This is attributed to the challenges in interpreting images with diverse characteristics including lesions of varying sizes and shapes, lesions that may have fuzzy boundaries, different skin colors and the presence of hair [2]. Therefore, the automatic analysis of dermoscopy images is a valuable aid for clinical decision making and for image-based diagnosis to identify diseases such as melanoma [1-4].",
"title": ""
},
{
"docid": "618399656be5063862ea28b9dc4bceb3",
"text": "In RF receivers, many interferers accompany the desired signal in its entrance to the front end of the receiver. Of particular interest out of these interferes is the image, which if downconverted with the desired signal by the same local oscillator, severely corrupts the received information. In low IF receivers, the image can only be eliminated after the downconversion by several means, one of which is using a polyphase filter. This paper discusses available current solutions and then introduces a new low power high dynamic range polyphase filter built with the commercially available Plus-Type Second Generation Current Conveyors (CCII+). SPICE simulation results show that image rejection of 15dBs can be achieved using only one stage of this filter.",
"title": ""
},
{
"docid": "dd080a0ad38076c2693d6bcef574b053",
"text": "We present an approach to detect network configuration errors, which combines the benefits of two prior approaches. Like prior techniques that analyze configuration files, our approach can find errors proactively, before the configuration is applied, and answer “what if” questions. Like prior techniques that analyze data-plane snapshots, our approach can check a broad range of forwarding properties and produce actual packets that violate checked properties. We accomplish this combination by faithfully deriving and then analyzing the data plane that would emerge from the configuration. Our derivation of the data plane is fully declarative, employing a set of logical relations that represent the control plane, the data plane, and their relationship. Operators can query these relations to understand identified errors and their provenance. We use our approach to analyze two large university networks with qualitatively different routing designs and find many misconfigurations in each. Operators have confirmed the majority of these as errors and have fixed their configurations accordingly.",
"title": ""
},
{
"docid": "afe5c88202f6cb09d9ae2fe480d29201",
"text": "102 unaccompanied asylum-seeking adolescents showed a high prevalence of infections (58.8 %; 20 % with parasites), iron deficiency anemia (17.6 %), and a very low prevalence of non-communicable diseases (<2.0 %) [3]. The vast majority of exile children are traveling with their parents. Stress levels of parents have significantly increased which in turn negatively impacts children that are now significantly more traumatized. The number of unattended minors is rapidly rising, for instance in Germany the total number of unaccompanied minors in 2015 exceeded 60,000 [1]. Unaccompanied minors are at higher risk to develop mental health problems. Unfortunately, a significant proportion of them may be at risk of trafficking, abuse or neglect. In the above study, mental illness was present in 13.7 % of adolescents and females were more frequently affected [3]. Studies showed higher rates of depression, post-traumatic stress disorder and anxiety disorders among war refugees [4, 5]. A recent Sweden survey pointed out that neurotic disorders are more common in the unaccompanied refugees [6]. A Greek study showed an increase in the incidents of deaths of unaccompanied minors during 2014–2015 compared to previous years 2011–2013; more than 70 children have drowned trying to rich a Greek island and majority of others are often soaking wet, freezing, suffering from hypothermia due to a winter conditions [7].",
"title": ""
},
{
"docid": "e57168f624200cdfd6798cfd42ecce23",
"text": "Recurrent neural networks (RNNs) are typically considered as relatively simple architectures, which come along with complicated learning algorithms. This paper has a different view: We start from the fact that RNNs can model any high dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a well-known example of this modeling philosophy. Here a temporal algorithm is transferred into an architectural framework such that the learning can be performed by an extension of standard error backpropagation. We introduce 12 tricks that not only provide deeper insights in the functioning of RNNs but also improve the identification of underlying dynamical system from data.",
"title": ""
},
{
"docid": "53a05c0438a0a26c8e3e74e1fa7b192b",
"text": "This paper presents a simple method based on sinusoidal-amplitude detector for realizing the resolver-signal demodulator. The proposed demodulator consists of two full-wave rectifiers, two ±unity-gain amplifiers, and two sinusoidal-amplitude detectors with control switches. Two output voltages are proportional to sine and cosine envelopes of resolver-shaft angle without low-pass filter. Experimental results demonstrating characteristic of the proposed circuit are included.",
"title": ""
},
{
"docid": "ebd72a597dba9a41dba5f3f0b4d1e6b9",
"text": "One may consider that drug-drug interactions (DDIs) associated with antacids is an obsolete topic because they are prescribed less frequently by medical professionals due to the advent of drugs that more effectively suppress gastric acidity (i.e. histamine H2-receptor antagonists [H2RAs] and proton pump inhibitors [PPIs]). Nevertheless, the use of antacids by ambulant patients may be ever increasing, because they are freely available as over-the-counter (OTC) drugs. Antacids consisting of weak basic substances coupled with polyvalent cations may alter the rate and/or the extent of absorption of concomitantly administered drugs via different mechanisms. Polyvalent cations in antacid formulations may form insoluble chelate complexes with drugs and substantially reduce their bioavailability. Clinical studies demonstrated that two classes of antibacterial s (tetracyclines and fluoroquinolones) are susceptible to clinically relevant DDIs with antacids through this mechanism. Countermeasures against this type of DDI include spacing out the dosing interval —taking antacid either 4 hours before or 2 hours after administration of these antibacterials. Bisphosphonates may be susceptible to DDIs with antacids by the same mechanism, as described in the prescription information of most bisphosphonates, but no quantitative data about the DDIs are available. For drugs with solubility critically dependent on pH, neutralization of gastric fluid by antacids may alter the dissolution of these drugs and the rate and/or extent of their absorption. However, the magnitude of DDIs elicited by antacids through this mechanism is less than that produced by H2RAs or PPIs; therefore, the clinical relevance of such DDIs is often obscure. Magnesium ions contained in some antacid formulas may increase gastric emptying, thereby accelerating the rate of absorption of some drugs. However, the clinical relevance of this is unclear in most cases because the difference in plasma drug concentration observed after dosing shortly disappears. Recent reports have indicated that some of the molecular-targeting agents such as the tyrosine kinase inhibitors dasatinib and imatinib, and the thrombopoietin receptor agonist eltrombopag may be susceptible to DDIs with antacids. Finally, the recent trend of developing OTC drugs as combination formulations of an antacid and an H2RA is a concern because these drugs will increase the risk of DDIs by dual mechanisms, i.e. a gastric pH-dependent mechanism by H2RAs and a cation-mediated chelation mechanism by antacids.",
"title": ""
},
{
"docid": "06ab903f3de4c498e1977d7d0257f8f3",
"text": "BACKGROUND\nthe analysis of microbial communities through dna sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.",
"title": ""
},
{
"docid": "5cc7f7aae87d95ea38c2e5a0421e0050",
"text": "Scrum is a structured framework to support complex product development. However, Scrum methodology faces a challenge of managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in Scrum of Scrums; to make the system work after integration of all parts; to reduce the dependencies between the parts of system; and to prevent the duplication of parts in the system. [Qurashi SA, Qureshi MRJ. Scrum of Scrums Solution for Large Size Teams Using Scrum Methodology. Life Sci J 2014;11(8):443-449]. (ISSN:1097-8135). http://www.lifesciencesite.com. 58",
"title": ""
},
{
"docid": "d89f10b6df65f5a40bc33cac064e3cdd",
"text": "In this paper we provide empirical evidence that using humanlike gaze cues during human-robot handovers can improve the timing and perceived quality of the handover event. Handovers serve as the foundation of many human-robot tasks. Fluent, legible handover interactions require appropriate nonverbal cues to signal handover intent, location and timing. Inspired by observations of human-human handovers, we implemented gaze behaviors on a PR2 humanoid robot. The robot handed over water bottles to a total of 102 naïve subjects while varying its gaze behaviour: no gaze, gaze designed to elicit shared attention at the handover location, and the shared attention gaze complemented with a turn-taking cue. We compared subject perception of and reaction time to the robot-initiated handovers across the three gaze conditions. Results indicate that subjects reach for the offered object significantly earlier when a robot provides a shared attention gaze cue during a handover. We also observed a statistical trend of subjects preferring handovers with turn-taking gaze cues over the other conditions. Our work demonstrates that gaze can play a key role in improving user experience of human-robot handovers, and help make handovers fast and fluent.",
"title": ""
},
{
"docid": "38fa8db9d32fd8cf7d43a9db62f7b8e1",
"text": "In this paper will be presented a methodology for implementation of the Line Impedance Stabilization Network (LISN), of commutable symmetric kind, as the specified on Standard IEC CISPR 16-1, using low cost easily acquirable components in the electro-electronic business. The Line Impedance Stabilization Network is used for conducted EMI tests in equipment’s witch current is not above 16 A.",
"title": ""
},
{
"docid": "dcf24411ffed0d5bf2709e005f6db753",
"text": "Dynamic Causal Modelling (DCM) is an approach first introduced for the analysis of functional magnetic resonance imaging (fMRI) to quantify effective connectivity between brain areas. Recently, this framework has been extended and established in the magneto/encephalography (M/EEG) domain. DCM for M/EEG entails the inversion a full spatiotemporal model of evoked responses, over multiple conditions. This model rests on a biophysical and neurobiological generative model for electrophysiological data. A generative model is a prescription of how data are generated. The inversion of a DCM provides conditional densities on the model parameters and, indeed on the model itself. These densities enable one to answer key questions about the underlying system. A DCM comprises two parts; one part describes the dynamics within and among neuronal sources, and the second describes how source dynamics generate data in the sensors, using the lead-field. The parameters of this spatiotemporal model are estimated using a single (iterative) Bayesian procedure. In this paper, we will motivate and describe the current DCM framework. Two examples show how the approach can be applied to M/EEG experiments.",
"title": ""
},
{
"docid": "04652a4dd33641fd6ec5eccc1c5d07fa",
"text": "This paper describes a supervised learning algorithm which optimizes a feature representation for temporally constrained clustering. The proposed method is applied to music segmentation, in which a song is partitioned into functional or locally homogeneous segments (e.g., verse or chorus). To facilitate abstraction over multiple training examples, we develop a latent structural repetition feature, which summarizes the repetitive structure of a song of any length in a fixed-dimensional representation. Experimental results demonstrate that the proposed method efficiently integrates heterogeneous features, and improves segmentation accuracy.",
"title": ""
},
{
"docid": "94ade8e5d8984506b2500835f973fc56",
"text": "Sentiment analysis is a text categorization problem that consists in automatically assigning text documents to pre- defined classes that represent sentiments or a positive/negative opinion about a subject. To solve this task, machine learning techniques can be used. However, in order to achieve good gen- eralization, these techniques require a thorough pre-processing and an apropriate data representation. To deal with these fundamental issues, this work proposes the use of convolutional neural networks and density-based clustering algorithms. The word representations used in this work were obtained from vectors previously trained in an unsupervised way, denominated word embeddings. These representations are able to capture syntactic and semantic information of words, which leads to similar words to be projected closer together in the semantic space. In this scenario, in order to improve the performance of the convolutional neural network, the use of a clustering algorithm in the semantic space to extract additional information from the data is proposed. A density-based clustering algorithm was used to detect and remove outliers from the documents to be classified before these documents were used to train the con- volutional neural network. We conducted experiments with two different embeddings across three datasets in order to validate the effectiveness of our method. Results show that removing outliers from documents is capable of slightly improving the accuracy of the model and reducing computational cost for the non-static training approach. (0)",
"title": ""
},
{
"docid": "3f429e92d9f2fbe0877ead4a4c6628bf",
"text": "Current adaptive mixed criticality scheduling policies assume a high criticality mode in which all low criticality tasks are descheduled to ensure that high criticality tasks can meet timing constraints derived from certification approved methods. In this paper we present a new scheduling policy, Adaptive Mixed Criticality - Weakly Hard, which provides a guaranteed minimum quality of service for low criticality tasks in the event of a criticality mode change. We derive response time based schedulability tests for this model. Empirical evaluations are then used to assess the relative performance against previously published policies and their schedulability tests.",
"title": ""
},
{
"docid": "97d56e588b70911104d3a83cbdbc7a67",
"text": "This study describes a nose prosthetic rehabilitation using computer-aided design and manufacturing (CAD-CAM) technology after facial disfigurement because of a total rhinectomy. A patient with a total rhinectomy was scheduled for a nasal prosthesis. Based on the 3-D model of the patient’s face reconstructed with the CT data, a four-piece mould for the nasal prosthesis was prototyped using a CAD-CAM procedure. Conventional silicone was processed with this physical mould to fabricate the definitive nasal prosthesis. A silicone nasal prosthesis was manufactured. The size, shape, and cosmetic look of the prosthesis were satisfactory and matched the nasal defect area well. The protocol presented herein illustrates favorable clinical treatment outcomes in the prosthetic rehabilitation after a total rhinectomy by means of CAD-CAM technology.",
"title": ""
},
{
"docid": "dbcfb877dae759f9ad1e451998d8df38",
"text": "Detection and tracking of humans in video streams is important for many applications. We present an approach to automatically detect and track multiple, possibly partially occluded humans in a walking or standing pose from a single camera, which may be stationary or moving. A human body is represented as an assembly of body parts. Part detectors are learned by boosting a number of weak classifiers which are based on edgelet features. Responses of part detectors are combined to form a joint likelihood model that includes an analysis of possible occlusions. The combined detection responses and the part detection responses provide the observations used for tracking. Trajectory initialization and termination are both automatic and rely on the confidences computed from the detection responses. An object is tracked by data association and meanshift methods. Our system can track humans with both inter-object and scene occlusions with static or non-static backgrounds. Evaluation results on a number of images and videos and comparisons with some previous methods are given.",
"title": ""
},
{
"docid": "8e3ec22c60c9df59570d1781cf03c627",
"text": "We examine the recent move from a rhetoric of “users” toward one of “makers,” “crafters,” and “hackers” within HCI discourse. Through our analysis, we make several contributions. First, we provide a general overview of the structure and common framings within research on makers. We discuss how these statements reconfigure themes of empowerment and progress that have been central to HCI rhetoric since the field's inception. In the latter part of the article, we discuss the consequences of these shifts for contemporary research problems. In particular, we explore the problem of designed obsolescence, a core issue for Sustainable Interaction Design (SID) research. We show how the framing of the maker, as an empowered subject, presents certain opportunities and limitations for this research discourse. Finally, we offer alternative framings of empowerment that can expand maker discourse and its use in contemporary research problems such as SID.",
"title": ""
},
{
"docid": "b5923306d14f598d90f69183032af9ee",
"text": "This paper introduces a base station antenna system as a future cellular technology. The base station antenna system is the key to achieving high-speed data transmission. It is particularly important to improve the frequency reuse factor as one of the roles of a base station. Furthermore, in order to solve the interference problem due to the same frequency being used by the macro cell and the small cell, the author focuses on beam and null control using an AAS (Active Antenna System) and elucidates their effects through area simulations and field tests. The results showed that AAS can improve the SINR (signal to interference-plusnoise ratio) of the small cell area inside macro cells. The paper shows that cell quality performance can be improved by incorporating the AAS into a cellular base station as its antenna system for beyond 4G radio access technology including the 5G cellular system. key words: active antenna system, null, cellular, base station antenna, small cell",
"title": ""
},
{
"docid": "0320ebc09663ecd6bf5c39db472fcbde",
"text": "The human visual system is capable of learning an unbounded number of facts from images including not only objects but also their attributes, actions and interactions. Such uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously with a capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse >). Each fact has a language view (e.g., < boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only a uniform but also generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods on several datasets that we augmented with structured facts and a large scale dataset of > 202,000 facts and 814,000 images. Our results show the advantage of relating facts by the structure by the proposed model compared to the baselines.",
"title": ""
}
] |
scidocsrr
|
52609b38899cdb4b0d71d361dd7443b1
|
One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling
|
[
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
},
{
"docid": "cdc77cc0dfb4dc9c91e20c3118b1d1ee",
"text": "Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a novel speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself. This factoring of the model leads to fewer nonzero indicator functions, and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also results in typically slightly lower perplexities. The same trick can be used to speed training of other machine learning techniques, e.g. neural networks, applied to any problem with a large number of outputs, such as language modeling.",
"title": ""
}
] |
[
{
"docid": "fb6ed16b64cb3246e6670fd718bda819",
"text": "There are various premier software packages available in the market, either for free use or found at a high price, to analyse the century old electrical power system. Universities in the developed countries expend thousands of dollars per year to bring these commercial applications to the desktops of students, teachers and researchers. For teachers and researchers this is regarded as a good long-term investment. As well, for the postgraduate students these packages are very important to validate the model developed during course of study. For simulating different test cases and/or standard systems, which are readily available with these widely used commercial software packages, such enriched software plays an important role. But in case of underdeveloped and developing countries the high amount of money needed to be expended per year to purchase commercial software is a farfetched idea. In addition, undergraduate students who are learning power system for the very first time find these packages incongruous for them since they are not familiar with the detailed input required to run the program. Even if it is a simple load flow program to find the steady-state behaviour of the system, or an elementary symmetrical fault analysis test case these packages require numerous inputs since they mimic a practical power system rather than considering simple test cases. In effect, undergraduate students tend to stay away from these packages. So rather than aiding the study in power system, these create a bad impression on students‘ mind about the very much interesting course.",
"title": ""
},
{
"docid": "4eb9808144e04bf0c01121f2ec7261d2",
"text": "The rise of multicore computing has greatly increased system complexity and created an additional burden for software developers. This burden is especially troublesome when it comes to optimizing software on modern computing systems. Autonomic or adaptive computing has been proposed as one method to help application programmers handle this complexity. In an autonomic computing environment, system services monitor applications and automatically adapt their behavior to increase the performance of the applications they support. Unfortunately, applications often run as performance black-boxes and adaptive services must infer application performance from low-level information or rely on system-specific ad hoc methods. This paper proposes a standard framework, Application Heartbeats, which applications can use to communicate both their current and target performance and which autonomic services can use to query these values.\n The Application Heartbeats framework is designed around the well-known idea of a heartbeat. At important points in the program, the application registers a heartbeat. In addition, the interface allows applications to express their performance in terms of a desired heart rate and/or a desired latency between specially tagged heartbeats. Thus, the interface provides a standard method for an application to directly communicate its performance and goals while allowing autonomic services access to this information. Thus, Heartbeat-enabled applications are no longer performance black-boxes. This paper presents the Applications Heartbeats interface, characterizes two reference implementations (one suitable for clusters and one for multicore), and illustrates the use of Heartbeats with several examples of systems adapting behavior based on feedback from heartbeats.",
"title": ""
},
{
"docid": "3c4f19544e9cc51d307c6cc9aea63597",
"text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.",
"title": ""
},
{
"docid": "c25a3f84fab51b6cd659bd3d168a33d8",
"text": "Achieving automatic interoperability among systems with diverse data structures and languages expressing different viewpoints is a goal that has been difficult to accomplish. This paper describes S-Match, an open source semantic matching framework that tackles the semantic interoperability problem by transforming several data structures such as business catalogs, web directories, conceptual models and web services descriptions into lightweight ontologies and establishing semantic correspondences between them. The framework is the first open source semantic matching project that includes three different algorithms tailored for specific domains and provides an extensible API for developing new algorithms, including possibility to plug-in specific background knowledge according to the characteristics of each application domain.",
"title": ""
},
{
"docid": "41a287c7ecc5921aedfa5b733a928178",
"text": "This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.",
"title": ""
},
{
"docid": "082894a8498a5c22af8903ad8ea6399a",
"text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.",
"title": ""
},
{
"docid": "a72fb284dd89d01ec72138b072f2ed52",
"text": "To better understand the reward circuitry in human brain, we conducted activation likelihood estimation (ALE) and parametric voxel-based meta-analyses (PVM) on 142 neuroimaging studies that examined brain activation in reward-related tasks in healthy adults. We observed several core brain areas that participated in reward-related decision making, including the nucleus accumbens (NAcc), caudate, putamen, thalamus, orbitofrontal cortex (OFC), bilateral anterior insula, anterior cingulate cortex (ACC) and posterior cingulate cortex (PCC), as well as cognitive control regions in the inferior parietal lobule and prefrontal cortex (PFC). The NAcc was commonly activated by both positive and negative rewards across various stages of reward processing (e.g., anticipation, outcome, and evaluation). In addition, the medial OFC and PCC preferentially responded to positive rewards, whereas the ACC, bilateral anterior insula, and lateral PFC selectively responded to negative rewards. Reward anticipation activated the ACC, bilateral anterior insula, and brain stem, whereas reward outcome more significantly activated the NAcc, medial OFC, and amygdala. Neurobiological theories of reward-related decision making should therefore take distributed and interrelated representations of reward valuation and valence assessment into account.",
"title": ""
},
{
"docid": "14fac379b3d4fdfc0024883eba8431b3",
"text": "PURPOSE\nTo summarize the literature addressing subthreshold or nondamaging retinal laser therapy (NRT) for central serous chorioretinopathy (CSCR) and to discuss results and trends that provoke further investigation.\n\n\nMETHODS\nAnalysis of current literature evaluating NRT with micropulse or continuous wave lasers for CSCR.\n\n\nRESULTS\nSixteen studies including 398 patients consisted of retrospective case series, prospective nonrandomized interventional case series, and prospective randomized clinical trials. All studies but one evaluated chronic CSCR, and laser parameters varied greatly between studies. Mean central macular thickness decreased, on average, by ∼80 μm by 3 months. Mean best-corrected visual acuity increased, on average, by about 9 letters by 3 months, and no study reported a decrease in acuity below presentation. No retinal complications were observed with the various forms of NRT used, but six patients in two studies with micropulse laser experienced pigmentary changes in the retinal pigment epithelium attributed to excessive laser settings.\n\n\nCONCLUSION\nBased on the current evidence, NRT demonstrates efficacy and safety in 12-month follow-up in patients with chronic and possibly acute CSCR. The NRT would benefit from better standardization of the laser settings and understanding of mechanisms of action, as well as further prospective randomized clinical trials.",
"title": ""
},
{
"docid": "78cdf97dc577740b0b9e6815b6c1c4f4",
"text": "Today, the technology for video streaming over the Internet is converging towards a paradigm named HTTP-based adaptive streaming (HAS), which brings two new features. First, by using HTTP/TCP, it leverages network-friendly TCP to achieve both firewall/NAT traversal and bandwidth sharing. Second, by pre-encoding and storing the video in a number of discrete rate levels, it introduces video bitrate adaptivity in a scalable way so that the video encoding is excluded from the closed-loop adaptation. A conventional wisdom in HAS design is that since the TCP throughput observed by a client would indicate the available network bandwidth, it could be used as a reliable reference for video bitrate selection. We argue that this is no longer true when HAS becomes a substantial fraction of the total network traffic. We show that when multiple HAS clients compete at a network bottleneck, the discrete nature of the video bitrates results in difficulty for a client to correctly perceive its fair-share bandwidth. Through analysis and test bed experiments, we demonstrate that this fundamental limitation leads to video bitrate oscillation and other undesirable behaviors that negatively impact the video viewing experience. We therefore argue that it is necessary to design at the application layer using a \"probe and adapt\" principle for video bitrate adaptation (where \"probe\" refers to trial increment of the data rate, instead of sending auxiliary piggybacking traffic), which is akin, but also orthogonal to the transport-layer TCP congestion control. We present PANDA - a client-side rate adaptation algorithm for HAS - as a practical embodiment of this principle. Our test bed results show that compared to conventional algorithms, PANDA is able to reduce the instability of video bitrate selection by over 75% without increasing the risk of buffer underrun.",
"title": ""
},
{
"docid": "77908ab362e0a26e395bc2d2bf07e0ee",
"text": "In this paper we consider the problem of exploring an unknown environment by a team of robots. As in single-robot exploration the goal is to minimize the overall exploration time. The key problem to be solved therefore is to choose appropriate target points for the individual robots so that they simultaneously explore different regions of their environment. We present a probabilistic approach for the coordination of multiple robots which, in contrast to previous approaches, simultaneously takes into account the costs of reaching a target point and the utility of target points. The utility of target points is given by the size of the unexplored area that a robot can cover with its sensors upon reaching a target position. Whenever a target point is assigned to a specific robot, the utility of the unexplored area visible from this target position is reduced for the other robots. This way, a team of multiple robots assigns different target points to the individual robots. The technique has been implemented and tested extensively in real-world experiments and simulation runs. The results given in this paper demonstrate that our coordination technique significantly reduces the exploration time compared to previous approaches. '",
"title": ""
},
{
"docid": "74f90683e6daae840cdb5ffa3c1b6e4a",
"text": "Texture atlas parameterization provides an effective way to map a variety of color and data attributes from 2D texture domains onto polygonal surface meshes. However, the individual charts of such atlases are typically plagued by noticeable seams. We describe a new type of atlas which is seamless by construction. Our seamless atlas comprises all quadrilateral charts, and permits seamless texturing, as well as per-fragment down-sampling on rendering hardware and polygon simplification. We demonstrate the use of this atlas for capturing appearance attributes and producing seamless renderings.",
"title": ""
},
{
"docid": "d780db3ec609d74827a88c0fa0d25f56",
"text": "Highly automated test vehicles are rare today, and (independent) researchers have often limited access to them. Also, developing fully functioning system prototypes is time and effort consuming. In this paper, we present three adaptions of the Wizard of Oz technique as a means of gathering data about interactions with highly automated vehicles in early development phases. Two of them address interactions between drivers and highly automated vehicles, while the third one is adapted to address interactions between pedestrians and highly automated vehicles. The focus is on the experimental methodology adaptations and our lessons learned.",
"title": ""
},
{
"docid": "6403b543937832f641d98b9212d2428e",
"text": "Information edge and 3 millennium predisposed so many of revolutions. Business organization with emphasize on information systems is try to gathering desirable information for decision making. Because of comprehensive change in business background and emerge of computers and internet, the business structure and needed information had change, the competitiveness as a major factor for life of organizations in information edge is preyed of information technology challenges. In this article we have reviewed in the literature of information systems and discussed the concepts of information system as a strategic tool.",
"title": ""
},
{
"docid": "79ddf1042ce5b40306e0596851da93a2",
"text": "Introduction: Recently, radiofrequency ablation (RFA) has been increasingly used for the treatment of thyroid nodules. However, immediate morphological changes associated with bipolar devices are poorly shown. Aims: To present the results of analysis of gross and microscopic alterations in human thyroid tissue induced by RFA delivered through the application of the original patented device. Materials and methods: In total, there were 37 surgically removed thyroid glands in females aged 32-67 at presentation: 16 nodules were follicular adenoma (labelled as 'parenchymal' solid benign nodules) and adenomatous colloid goitre was represented by 21 cases. The thyroid gland was routinely processed and the nodules were sliced into two parts - one was a subject for histological routine processing according to the principles that universally apply in surgical pathology, the other one was used for the RFA procedure. Results: No significant difference in size reduction between parenchymal and colloid nodules was revealed (p>0.1, t-test) straight after the treatment. In addition, RFA equally effectively induced necrosis in follicular adenoma and adenomatous colloid goitre (p>0.1, analysis of variance test). As expected, tumour size correlated with size reduction (the smaller the size of the nodule, the greater percentage of the nodule volume that was ablated): r=-0.48 (p<0.0001). Conclusion: The results make it possible to move from ex vivo experiments to clinical practice.",
"title": ""
},
{
"docid": "d548f1b5593109d68c9f9167d18909ed",
"text": "| Recently, the development of three-dimensional large-scale integration (3D-LSI) has been accelerated. Its stage has changed from the research level or limited production level to the investigation level with a view to mass production [1]–[10]. The 3D-LSI using through-silicon via (TSV) has the simplest structure and is expected to realize a high-performance, highfunctionality, and high-density LSI cube. This paper describes the current and future 3D-LSI technologies with TSV.",
"title": ""
},
{
"docid": "7170a9d4943db078998e1844ad67ae9e",
"text": "Privacy has become increasingly important to the database community which is reflected by a noteworthy increase in research papers appearing in the literature. While researchers often assume that their definition of “privacy” is universally held by all readers, this is rarely the case; so many papers addressing key challenges in this domain have actually produced results that do not consider the same problem, even when using similar vocabularies. This paper provides an explicit definition of data privacy suitable for ongoing work in data repositories such as a DBMS or for data mining. The work contributes by briefly providing the larger context for the way privacy is defined legally and legislatively but primarily provides a taxonomy capable of thinking of data privacy technologically. We then demonstrate the taxonomy’s utility by illustrating how this perspective makes it possible to understand the important contribution made by researchers to the issue of privacy. The conclusion of this paper is that privacy is indeed multifaceted so no single current research effort adequately addresses the true breadth of the issues necessary to fully understand the scope of this important issue.",
"title": ""
},
{
"docid": "ecb82d413c47cff0e054c76360f09d48",
"text": "Grades often decline during the high school transition, creating stress. The present research integrates the biopsychosocial model of challenge and threat with the implicit theories model to understand who shows maladaptive stress responses. A diary study measured declines in grades in the first few months of high school: salivary cortisol (N = 360 students, N = 3,045 observations) and daily stress appraisals (N = 499 students, N = 3,854 observations). Students who reported an entity theory of intelligence (i.e., the belief that intelligence is fixed) showed higher cortisol when grades were declining. Moreover, daily academic stressors showed a different lingering effect on the next day's cortisol for those with different implicit theories. Findings support a process model through which beliefs affect biological stress responses during difficult adolescent transitions.",
"title": ""
},
{
"docid": "804b320c6f5b07f7f4d7c5be29c572e9",
"text": "Softmax is the most commonly used output function for multiclass problems and is widely used in areas such as vision, natural language processing, and recommendation. A softmax model has linear costs in the number of classes which makes it too expensive for many real-world problems. A common approach to speed up training involves sampling only some of the classes at each training step. It is known that this method is biased and that the bias increases the more the sampling distribution deviates from the output distribution. Nevertheless, almost all recent work uses simple sampling distributions that require a large sample size to mitigate the bias. In this work, we propose a new class of kernel based sampling methods and develop an efficient sampling algorithm. Kernel based sampling adapts to the model as it is trained, thus resulting in low bias. It can also be easily applied to many models because it relies only on the model’s last hidden layer. We empirically study the trade-off of bias, sampling distribution and sample size and show that kernel based sampling results in low bias with few samples.",
"title": ""
},
{
"docid": "2283e43c2bad5ac682fe185cb2b8a9c1",
"text": "As widely recognized in the literature, information technology (IT) investments have several special characteristics that make assessing their costs and benefits complicated. Here, we address the problem of evaluating a web content management system for both internal and external use. The investment is presently undergoing an evaluation process in a multinational company. We aim at making explicit the desired benefits and expected risks of the system investment. An evaluation hierarchy at general level is constructed. After this, a more detailed hierarchy is constructed to take into account the contextual issues. To catch the contextual issues key company representatives were interviewed. The investment alternatives are compared applying the principles of the Analytic Hierarchy Process (AHP). Due to the subjective and uncertain characteristics of the strategic IT investments a wide range of sensitivity analyses is performed.",
"title": ""
},
{
"docid": "efc1b9f824b285e686bb6e0226b0a407",
"text": "In the last 15 years, the threat of Muslim violent extremists emerging within Western countries has grown. Terrorist organizations based in the Middle East are recruiting Muslims in the United States and Europe via social media. Yet we know little about the factors that would drive Muslim immigrants in a Western country to heed this call and become radicalized, even at the cost of their own lives. Research into the psychology of terrorism suggests that a person’s cultural identity plays a key role in radicalization, so we surveyed 198 Muslims in the United States about their cultural identities and attitudes toward extremism. We found that immigrants who identify with neither their heritage culture nor the culture they are living in feel marginalized and insignificant. Experiences of discrimination make the situation worse and lead to greater support for radicalism, which promises a sense of meaning and life purpose. Such insights could be of use to policymakers engaged in efforts against violent extremism, including terrorism.",
"title": ""
}
] |
scidocsrr
|
dce030af2950ea5a1bed59ecc082216f
|
Privacy Loss in Apple's Implementation of Differential Privacy on MacOS 10.12
|
[
{
"docid": "60161ef0c46b4477f0cf35356bc3602c",
"text": "Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those * Alexandra Wood is a Fellow at the Berkman Klein Center for Internet & Society at Harvard University. Micah Altman is Director of Research at MIT Libraries. Aaron Bembenek is a PhD student in computer science at Harvard University. Mark Bun is a Google Research Fellow at the Simons Institute for the Theory of Computing. Marco Gaboardi is an Assistant Professor in the Computer Science and Engineering department at the State University of New York at Buffalo. James Honaker is a Research Associate at the Center for Research on Computation and Society at the Harvard John A. Paulson School of Engineering and Applied Sciences. Kobbi Nissim is a McDevitt Chair in Computer Science at Georgetown University and an Affiliate Professor at Georgetown University Law Center; work towards this document was completed in part while the Author was visiting the Center for Research on Computation and Society at Harvard University. David R. O’Brien is a Senior Researcher at the Berkman Klein Center for Internet & Society at Harvard University. Thomas Steinke is a Research Staff Member at IBM Research – Almaden. Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at Harvard University. This Article is the product of a working group of the Privacy Tools for Sharing Research Data project at Harvard University (http://privacytools.seas.harvard.edu). The working group discussions were led by Kobbi Nissim. Alexandra Wood and Kobbi Nissim are the lead Authors of this Article. Working group members Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, James Honaker, Kobbi Nissim, David R. O’Brien, Thomas Steinke, Salil Vadhan, and Alexandra Wood contributed to the conception of the Article and to the writing. The Authors thank John Abowd, Scott Bradner, Cynthia Dwork, Simson Garfinkel, Caper Gooden, Deborah Hurley, Rachel Kalmar, Georgios Kellaris, Daniel Muise, Michel Reymond, and Michael Washington for their many valuable comments on earlier versions of this Article. A preliminary version of this work was presented at the 9th Annual Privacy Law Scholars Conference (PLSC 2017), and the Authors thank the participants for contributing thoughtful feedback. The original manuscript was based upon work supported by the National Science Foundation under Grant No. CNS-1237235, as well as by the Alfred P. Sloan Foundation. The Authors’ subsequent revisions to the manuscript were supported, in part, by the US Census Bureau under cooperative agreement no. CB16ADR0160001. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the Authors and do not necessarily reflect the views of the National Science Foundation, the Alfred P. Sloan Foundation, or the US Census Bureau. 210 VAND. J. ENT. & TECH. L. [Vol. 21:1:209 currently unforeseen. Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. 
Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy. These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.",
"title": ""
}
] |
[
{
"docid": "72e5b92632824d3633539727125763bc",
"text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.",
"title": ""
},
{
"docid": "59445fae343192fb6a95b57e0801dd0b",
"text": "Online anomaly detection is an important step in data center management, requiring light-weight techniques that provide sufficient accuracy for subsequent diagnosis and management actions. This paper presents statistical techniques based on the Tukey and Relative Entropy statistics, and applies them to data collected from a production environment and to data captured from a testbed for multi-tier web applications running on server class machines. The proposed techniques are lightweight and improve over standard Gaussian assumptions in terms of performance.",
"title": ""
},
{
"docid": "3f6572916ac697188be30ef798acbbff",
"text": "The vector representation of Bengali words using word2vec model (Mikolov et al. (2013)) plays an important role in Bengali sentiment classification. It is observed that the words that are from same context stay closer in the vector space of word2vec model and they are more similar than other words. In this article, a new approach of sentiment classification of Bengali comments with word2vec and Sentiment extraction of words are presented. Combining the results of word2vec word co-occurrence score with the sentiment polarity score of the words, the accuracy obtained is 75.5%.",
"title": ""
},
{
"docid": "ec6d5a1b77c1e346a9b30e2477b39510",
"text": "The locomotion performances of a quadruped robot with compliant feet based on closed-chain mechanism legs are presented. The legs of this quadruped robot were made up of six-bar linkage mechanism with one degree of freedom. And a special foot trajectory could be gained through kinematic analysis and optimum design of the six-bar linkage mechanism. In order to reduce the impact force of quadruped robot's walking on the ground, two semicircle feet with different thickness were designed as compliant feet. The experimental results of this quadruped robot with different stiffness feet showed that the semicircle feet could reduce the driving torque and current of motors. This primary investigation illustrated that the compliant feet could improve the locomotion performance of a quadruped robot based on closed-chain mechanism legs.",
"title": ""
},
{
"docid": "cbcb20173f4e012253c51020932e75a6",
"text": "We investigate methods for combining multiple selfsupervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-toapples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a na¨ýve multihead architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.",
"title": ""
},
{
"docid": "9520b99708d905d3713867fac14c3814",
"text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.",
"title": ""
},
{
"docid": "4322f123ff6a1bd059c41b0037bac09b",
"text": "Nowadays, as a beauty-enhancing product, clothing plays an important role in human's social life. In fact, the key to a proper outfit usually lies in the harmonious clothing matching. Nevertheless, not everyone is good at clothing matching. Fortunately, with the proliferation of fashion-oriented online communities, fashion experts can publicly share their fashion tips by showcasing their outfit compositions, where each fashion item (e.g., a top or bottom) usually has an image and context metadata (e.g., title and category). Such rich fashion data offer us a new opportunity to investigate the code in clothing matching. However, challenges co-exist with opportunities. The first challenge lies in the complicated factors, such as color, material and shape, that affect the compatibility of fashion items. Second, as each fashion item involves multiple modalities (i.e., image and text), how to cope with the heterogeneous multi-modal data also poses a great challenge. Third, our pilot study shows that the composition relation between fashion items is rather sparse, which makes traditional matrix factorization methods not applicable. Towards this end, in this work, we propose a content-based neural scheme to model the compatibility between fashion items based on the Bayesian personalized ranking (BPR) framework. The scheme is able to jointly model the coherent relation between modalities of items and their implicit matching preference. Experiments verify the effectiveness of our scheme, and we deliver deep insights that can benefit future research.",
"title": ""
},
{
"docid": "6ef6c1b19d8c82f500ea2b1e213d750d",
"text": "Video summarization aims to facilitate large-scale video browsing by producing short, concise summaries that are diverse and representative of original videos. In this paper, we formulate video summarization as a sequential decisionmaking process and develop a deep summarization network (DSN) to summarize videos. DSN predicts for each video frame a probability, which indicates how likely a frame is selected, and then takes actions based on the probability distributions to select frames, forming video summaries. To train our DSN, we propose an end-to-end, reinforcement learningbased framework, where we design a novel reward function that jointly accounts for diversity and representativeness of generated summaries and does not rely on labels or user interactions at all. During training, the reward function judges how diverse and representative the generated summaries are, while DSN strives for earning higher rewards by learning to produce more diverse and more representative summaries. Since labels are not required, our method can be fully unsupervised. Extensive experiments on two benchmark datasets show that our unsupervised method not only outperforms other stateof-the-art unsupervised methods, but also is comparable to or even superior than most of published supervised approaches.",
"title": ""
},
{
"docid": "f7b1416cf869a56133fcde552e935a07",
"text": "The processes that occur with normal sternal healing and potential complications related to median sternotomy are of particular interest to physical therapists. The premise of patients following sternal precautions (SP) or specific activity restrictions is the belief that avoiding certain movements will reduce risk of sternal complications. However, current research has identified that many patients remain functionally impaired long after cardiothoracic surgery. It is possible that some SP may contribute to such functional impairments. Currently, SP have several limitations including that they: (1) have no universally accepted definition, (2) are often based on anecdotal/expert opinion or at best supported by indirect evidence, (3) are mostly applied uniformly for all patients without regard to individual differences, and (4) may be overly restrictive and therefore impede ideal recovery. The purpose of this article is to present an overview of current research and commentary on median sternotomy procedures and activity restrictions. We propose that the optimal degree and duration of SP should be based on an individual patient's characteristics (eg, risk factors, comorbidities, previous activity level) that would enable physical activity to be targeted to particular limitations rather than restricting specific functional tasks and physical activity. Such patient-specific SP focusing on function may be more likely to facilitate recovery after median sternotomy and less likely to impede it.",
"title": ""
},
{
"docid": "6e1b95d0c2cff4c372f451c8636b973e",
"text": "Multiple sclerosis (MS) is a chronic inflammatory demyelinating and neurodegenerative disease of central nervous system that affects both white and gray matter. Idiopathic calcification of the basal ganglia is a rare neurodegenerative disorder of unknown cause that is characterized by sporadic or familial brain calcification. Concurrence of multiple sclerosis (MS) and idiopathic basal ganglia calcification (Fahr's disease) is very rare event. In this study, we describe a cooccurrence of idiopathic basal ganglia calcification with multiple sclerosis. The association between this disease and MS is unclear and also maybe probably coincidental.",
"title": ""
},
{
"docid": "9979f112cd2617a150721e3a2dd70739",
"text": "In the academic literature, the matching accuracy of a biometric system is typically quantified through measures such as the Receiver Operating Characteristic (ROC) curve and Cumulative Match Characteristic (CMC) curve. The ROC curve, measuring verification performance, is based on aggregate statistics of match scores corresponding to all biometric samples, while the CMC curve, measuring identification performance, is based on the relative ordering of match scores corresponding to each biometric sample (in closed-set identification). In this study, we determine whether a set of genuine and impostor match scores generated from biometric data can be reassigned to virtual identities, such that the same ROC curve can be accompanied by multiple CMC curves. The reassignment is accomplished by modeling the intra- and inter-class relationships between identities based on the “Doddington Zoo” or “Biometric Menagerie” phenomenon. The outcome of the study suggests that a single ROC curve can be mapped to multiple CMC curves in closed-set identification, and that presentation of a CMC curve should be accompanied by a ROC curve when reporting biometric system performance, in order to better understand the performance of the matcher.",
"title": ""
},
{
"docid": "e59611016466e4928a07e996d0c7a90e",
"text": "Record linkage seeks to merge databases and to remove duplicates when unique identifiers are not available. Most approaches use blocking techniques to reduce the computational complexity associated with record linkage. We review traditional blocking techniques, which typically partition the records according to a set of field attributes, and consider two variants of a method known as locality sensitive hashing, sometimes referred to as “private blocking.” We compare these approaches in terms of their recall, reduction ratio, and computational complexity. We evaluate these methods using different synthetic datafiles and conclude with a discussion of privacy-related issues.",
"title": ""
},
{
"docid": "515a1d01abc880c1b6f560ce5a10207d",
"text": "We report on a compiler for Warp, a high-performance systolic array developed at Carnegie Mellon. This compiler enhances the usefulness of Warp significantly and allows application programmers to code substantial algorithms.\nThe compiler combines a novel programming model, which is based on a model of skewed computation for the array, with powerful optimization techniques. Programming in W2 (the language accepted by the compiler) is orders of magnitude easier than coding in microcode, the only alternative available previously.",
"title": ""
},
{
"docid": "4622df5210b363fbbecc9653894f9734",
"text": "Light field photography has gained a significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.",
"title": ""
},
{
"docid": "95a7892f685321d9c4608fbdc67b08aa",
"text": "In order to identify and explore the strengths and weaknesses of business intelligence (BI) initiatives, managers in charge need to assess the maturity of their BI efforts. For this, a wide range of maturity models has been developed, but these models often focus on technical details and do not address the potential value proposition of BI. Based on an extensive literature review and an empirical study, we develop and evaluate a theoretical model of impact-oriented BI maturity. Building on established IS theories, the model integrates BI deployment, BI usage, individual impact, and organizational performance. This conceptualization helps to refocus the topic of BI maturity to business needs and can be used as a theoretical foundation for future research.",
"title": ""
},
{
"docid": "f3a044835e9cbd0c13218ab0f9c06dd1",
"text": "Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface.",
"title": ""
},
{
"docid": "4074b8cd9b869a7a57f2697b97139308",
"text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a similarity space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define various operations for our formalization, both for creating new concepts from old ones and for measuring relations between concepts. We present an illustrative toy-example and sketch a research project on concept formation that is based on both our formalization and its implementation.",
"title": ""
},
{
"docid": "ec9f793761ebd5199c6a2cc8c8215ac4",
"text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. Some prototypes have been fabricated and measured, displaying a very good performance.",
"title": ""
},
{
"docid": "842ee1e812d408df7e6f7dfd95e32a36",
"text": "Abstract Phase segregation, the process by which the components of a binary mixture spontaneously separate, is a key process in the evolution and design of many chemical, mechanical, and biological systems. In this work, we present a data-driven approach for the learning, modeling, and prediction of phase segregation. A direct mapping between an initially dispersed, immiscible binary fluid and the equilibrium concentration field is learned by conditional generative convolutional neural networks. Concentration field predictions by the deep learning model conserve phase fraction, correctly predict phase transition, and reproduce area, perimeter, and total free energy distributions up to 98% accuracy.",
"title": ""
},
{
"docid": "63e45222ea9627ce22e9e90fc1ca4ea1",
"text": "A soft switching three-transistor push-pull(TTPP)converter is proposed in this paper. The 3rd transistor is inserted in the primary side of a traditional push-pull converter. Two primitive transistors can achieve zero-voltage-switching (ZVS) easily under a wide load range, the 3rd transistor can also realize zero-voltage-switching assisted by leakage inductance. The rated voltage of the 3rd transistor is half of that of the main transistors. The operation theory is explained in detail. The soft-switching realization conditions are derived. An 800 W with 83.3 kHz switching frequency prototype has been built. The experimental result is provided to verify the analysis.",
"title": ""
}
] |
scidocsrr
|
03d6eab61aff6018a9aaf59949895721
|
Enacted Routines in Agile and Waterfall Processes
|
[
{
"docid": "67d704317471c71842a1dfe74ddd324a",
"text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "349caca78b6d21b5f8853b41a8201429",
"text": "OBJECTIVE\nTo evaluate the effectiveness of a functional thumb orthosis on the dominant hand of patients with rheumatoid arthritis and boutonniere thumb.\n\n\nMETHODS\nForty patients with rheumatoid arthritis and boutonniere deformity of the thumb were randomly distributed into two groups. The intervention group used the orthosis daily and the control group used the orthosis only during the evaluation. Participants were evaluated at baseline as well as after 45 and 90 days. Assessments were preformed using the O'Connor Dexterity Test, Jamar dynamometer, pinch gauge, goniometry and the Health Assessment Questionnaire. A visual analogue scale was used to assess thumb pain in the metacarpophalangeal joint.\n\n\nRESULTS\nPatients in the intervention group experienced a statistically significant reduction in pain. The thumb orthosis did not disrupt grip and pinch strength, function, Health Assessment Questionnaire score or dexterity in either group.\n\n\nCONCLUSION\nThe use of thumb orthosis for type I and type II boutonniere deformities was effective in relieving pain.",
"title": ""
},
{
"docid": "01809d609802d949aa8c1604db29419d",
"text": "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish finegrained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using 20% and 33% less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.",
"title": ""
},
{
"docid": "f6fc0992624fd3b3e0ce7cc7fc411154",
"text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.",
"title": ""
},
{
"docid": "0a31ab53b887cf231d7ca1a286763e5f",
"text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.",
"title": ""
},
{
"docid": "89c3c358aca518a787c8c5e8151b161c",
"text": "In this paper, we introduce the progressive simplicial complex (PSC) representation, a new format for storing and transmitting triangulated geometric models. Like the earlier progressive mesh (PM) representation, it captures a given model as a coarse base model together with a sequence of refinement transformations that progressively recover detail. The PSC representation makes use of a more general refinement transformation, allowing the given model to be an arbitrary triangulation (e.g. any dimension, non-orientable, non-manifold, non-regular), and the base model to always consist of a single vertex. Indeed, the sequence of refinement transformations encodes both the geometry and the topology of the model in a unified multiresolution framework. The PSC representation retains the advantages of PM’s. It defines a continuous sequence of approximating models for runtime level-of-detail control, allows smooth transitions between any pair of models in the sequence, supports progressive transmission, and offers a space-efficient representation. Moreover, by allowing changes to topology, the PSC sequence of approximations achieves better fidelity than the corresponding PM sequence. We develop an optimization algorithm for constructing PSC representations for graphics surface models, and demonstrate the framework on models that are both geometrically and topologically complex. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling surfaces and object representations. Additional",
"title": ""
},
{
"docid": "c47627b3485f6d90788150c4a911c16b",
"text": "OBJECTIVES\nSeveral lines of evidence suggest that the prefrontal cortex is involved in working memory. Our goal was to determine whether transient functional disruption of the dorsolateral prefrontal cortex (DLPFC) would impair performance in a sequential-letter working memory task.\n\n\nMETHODS\nSubjects were shown sequences of letters and asked to state whether the letter just displayed was the same as the one presented 3-back. Single-pulse transcranial magnetic stimulation (TMS) was applied over the DLPFC between letter presentations.\n\n\nRESULTS\nTMS applied over the left DLPFC resulted in increased errors relative to no TMS controls. TMS over the right DLPFC did not alter working memory performance.\n\n\nCONCLUSION\nOur results indicate that the left prefrontal cortex has a crucial role in at least one type of working memory.",
"title": ""
},
{
"docid": "081dbece10d1363eca0ac01ce0260315",
"text": "With the surge of mobile internet traffic, Cloud RAN (C-RAN) becomes an innovative architecture to help mobile operators maintain profitability and financial growth as well as to provide better services to the customers. It consists of Base Band Units (BBU) of several base stations, which are co-located in a secured place called Central Office and connected to Radio Remote Heads (RRH) via high bandwidth, low latency links. With BBU centralization in C-RAN, handover, the most important feature for mobile communications, could achieve simplified procedure or improved performance. In this paper, we analyze the handover performance of C-RAN over a baseline decentralized RAN (D-RAN) for GSM, UMTS and LTE systems. The results indicate that, lower total average handover interrupt time could be achieved in GSM thanks to the synchronous nature of handovers in C-RAN. For UMTS, inter-NodeB soft handover in D-RAN would become intra-pool softer handover in C-RAN. This brings some gains in terms of reduced signalling, less Iub transport bearer setup and reduced transport bandwidth requirement. For LTE X2-based inter-eNB handover, C-RAN could reduce the handover delay and to a large extent eliminate the risk of UE losing its connection with the serving cell while still waiting for the handover command, which in turn decrease the handover failure rate.",
"title": ""
},
{
"docid": "becd66e0637b9b6dd07b45e6966227d6",
"text": "In real life, when telling a person’s age from his/her face, we tend to look at his/her whole face first and then focus on certain important regions like eyes. After that we will focus on each particular facial feature individually like the nose or the mouth so that we can decide the age of the person. Similarly, in this paper, we propose a new framework for age estimation, which is based on human face sub-regions. Each sub-network in our framework takes the input of two images each from human facial region. One of them is the global face, and the other is a vital sub-region. Then, we combine the predictions from different sub-regions based on a majority voting method. We call our framework Multi-Region Network Prediction Ensemble (MRNPE) and evaluate our approach using two popular public datasets: MORPH Album II and Cross Age Celebrity Dataset (CACD). Experiments show that our method outperforms the existing state-of-the-art age estimation methods by a significant margin. The Mean Absolute Errors (MAE) of age estimation are dropped from 3.03 to 2.73 years on the MORPH Album II and 4.79 to 4.40 years on the CACD.",
"title": ""
},
{
"docid": "a13ff1e2192c9a7e4bcfdf5e1ac39538",
"text": "Before graduating from X as Waymo, Google's self-driving car project had been using custom lidars for several years. In their latest revision, the lidars are designed to meet the challenging requirements we discovered in autonomously driving 2 million highly-telemetered miles on public roads. Our goal is to approach price points required for advanced driver assistance systems (ADAS) while meeting the performance needed for safe self-driving. This talk will review some history of the project and describe a few use-cases for lidars on Waymo cars. Out of that will emerge key differences between lidars for self-driving and traditional applications (e.g. mapping) which may provide opportunities for semiconductor lasers.",
"title": ""
},
{
"docid": "fc2a7c789f742dfed24599997845b604",
"text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.",
"title": ""
},
{
"docid": "a8ad71932fa864edc2349abcc366c509",
"text": "In response to increasingly sophisticated state-sponsored Internet censorship, recent work has proposed a new approach to censorship resistance: end-to-middle proxying. This concept, developed in systems such as Telex, Decoy Routing, and Cirripede, moves anticensorship technology into the core of the network, at large ISPs outside the censoring country. In this paper, we focus on two technical obstacles to the deployment of certain end-to-middle schemes: the need to selectively block flows and the need to observe both directions of a connection. We propose a new construction, TapDance, that removes these requirements. TapDance employs a novel TCP-level technique that allows the anticensorship station at an ISP to function as a passive network tap, without an inline blocking component. We also apply a novel steganographic encoding to embed control messages in TLS ciphertext, allowing us to operate on HTTPS connections even under asymmetric routing. We implement and evaluate a TapDance prototype that demonstrates how the system could function with minimal impact on an ISP’s network operations.",
"title": ""
},
{
"docid": "879af50edd27c74bde5b656d0421059a",
"text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.",
"title": ""
},
{
"docid": "df3ef3feeaf787315188db2689dc6fb9",
"text": "Multi-class weather classification from single images is a fundamental operation in many outdoor computer vision applications. However, it remains difficult and the limited work is carried out for addressing the difficulty. Moreover, existing method is based on the fixed scene. In this paper we present a method for any scenario multi-class weather classification based on multiple weather features and multiple kernel learning. Our approach extracts multiple weather features and takes properly processing. By combining these features into high dimensional vectors, we utilize multiple kernel learning to learn an adaptive classifier. We collect an outdoor image set that contains 20K images called MWI (Multi-class Weather Image) set. Experimental results show that the proposed method can efficiently recognize weather on MWI dataset.",
"title": ""
},
{
"docid": "03d11f57ae5fbd09f10baaf4c9a29a55",
"text": "The standard approach to computer-aided language translation is post-editing: a machine generates a single translation that a human translator corrects. Recent studies have shown this simple technique to be surprisingly effective, yet it underutilizes the complementary strengths of precision-oriented humans and recall-oriented machines. We present Predictive Translation Memory, an interactive, mixed-initiative system for human language translation. Translators build translations incrementally by considering machine suggestions that update according to the user's current partial translation. In a large-scale study, we find that professional translators are slightly slower in the interactive mode yet produce slightly higher quality translations despite significant prior experience with the baseline post-editing condition. Our analysis identifies significant predictors of time and quality, and also characterizes interactive aid usage. Subjects entered over 99% of characters via interactive aids, a significantly higher fraction than that shown in previous work.",
"title": ""
},
{
"docid": "b2e689cc561569f2c87e72aa955b54fe",
"text": "Ensemble learning is attracting much attention from pattern recognition and machine learning domains for good generalization. Both theoretical and experimental researches show that combining a set of accurate and diverse classifiers will lead to a powerful classification system. An algorithm, called FS-PP-EROS, for selective ensemble of rough subspaces is proposed in this paper. Rough set-based attribute reduction is introduced to generate a set of reducts, and then each reduct is used to train a base classifier. We introduce an accuracy-guided forward search and post-pruning strategy to select part of the base classifiers for constructing an efficient and effective ensemble system. The experiments show that classification accuracies of ensemble systems with accuracy-guided forward search strategy will increase at first, arrive at a maximal value, then decrease in sequentially adding the base classifiers. We delete the base classifiers added after the maximal accuracy. The experimental results show that the proposed ensemble systems outperform bagging and random subspace methods in terms of accuracy and size of ensemble systems. FS-PP-EROS can keep or improve the classification accuracy with very few base classifiers, which leads to a powerful and compact classification system. 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a0dbf8e57a7e11f88bc3ed14a1eabad7",
"text": "Detecting vehicles in aerial imagery plays an important role in a wide range of applications. The current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, having limited description capability and heavy computational costs. Recently, due to the powerful feature representations, region convolutional neural networks (CNN) based detection methods have achieved state-of-the-art performance in computer vision, especially Faster R-CNN. However, directly using it for vehicle detection in aerial images has many limitations: (1) region proposal network (RPN) in Faster R-CNN has poor performance for accurately locating small-sized vehicles, due to the relatively coarse feature maps; and (2) the classifier after RPN cannot distinguish vehicles and complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed in order to accomplish the two challenges mentioned above. Firstly, to improve the recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets with a combination of hierarchical feature maps. Then, we replace the classifier after RPN by a cascade of boosted classifiers to verify the candidate regions, aiming at reducing false detection by negative example mining. We evaluate our method on the Munich vehicle dataset and the collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.",
"title": ""
},
{
"docid": "47d8feb4c7ee6bc6e2b2b9bd21591a3b",
"text": "BACKGROUND\nAlthough local anesthetics (LAs) are hyperbaric at room temperature, density drops within minutes after administration into the subarachnoid space. LAs become hypobaric and therefore may cranially ascend during spinal anesthesia in an uncontrolled manner. The authors hypothesized that temperature and density of LA solutions have a nonlinear relation that may be described by a polynomial equation, and that conversion of this equation may provide the temperature at which individual LAs are isobaric.\n\n\nMETHODS\nDensity of cerebrospinal fluid was measured using a vibrating tube densitometer. Temperature-dependent density data were obtained from all LAs commonly used for spinal anesthesia, at least in triplicate at 5 degrees, 20 degrees, 30 degrees, and 37 degrees C. The hypothesis was tested by fitting the obtained data into polynomial mathematical models allowing calculations of substance-specific isobaric temperatures.\n\n\nRESULTS\nCerebrospinal fluid at 37 degrees C had a density of 1.000646 +/- 0.000086 g/ml. Three groups of local anesthetics with similar temperature (T, degrees C)-dependent density (rho) characteristics were identified: articaine and mepivacaine, rho1(T) = 1.008-5.36 E-06 T2 (heavy LAs, isobaric at body temperature); L-bupivacaine, rho2(T) = 1.007-5.46 E-06 T2 (intermediate LA, less hypobaric than saline); bupivacaine, ropivacaine, prilocaine, and lidocaine, rho3(T) = 1.0063-5.0 E-06 T (light LAs, more hypobaric than saline). Isobaric temperatures (degrees C) were as follows: 5 mg/ml bupivacaine, 35.1; 5 mg/ml L-bupivacaine, 37.0; 5 mg/ml ropivacaine, 35.1; 20 mg/ml articaine, 39.4.\n\n\nCONCLUSION\nSophisticated measurements and mathematic models now allow calculation of the ideal injection temperature of LAs and, thus, even better control of LA distribution within the cerebrospinal fluid. The given formulae allow the adaptation on subpopulations with varying cerebrospinal fluid density.",
"title": ""
},
{
"docid": "b01436481aa77ebe7538e760132c5f3c",
"text": "We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.",
"title": ""
},
{
"docid": "328abff1a187a71fe77ce078e9f1647b",
"text": "A convenient way of analysing Riemannian manifolds is to embed them in Euclidean spaces, with the embedding typically obtained by flattening the manifold via tangent spaces. This general approach is not free of drawbacks. For example, only distances between points to the tangent pole are equal to true geodesic distances. This is restrictive and may lead to inaccurate modelling. Instead of using tangent spaces, we propose embedding into the Reproducing Kernel Hilbert Space by introducing a Riemannian pseudo kernel. We furthermore propose to recast a locality preserving projection technique from Euclidean spaces to Riemannian manifolds, in order to demonstrate the benefits of the embedding. Experiments on several visual classification tasks (gesture recognition, person re-identification and texture classification) show that in comparison to tangent-based processing and state-of-the-art methods (such as tensor canonical correlation analysis), the proposed approach obtains considerable improvements in discrimination accuracy.",
"title": ""
},
{
"docid": "6a1d1be521a4ac0d838cebe2a779b1a9",
"text": "Immunoglobulin (IgM) was isolated from the serum of four fish species, Atlantic salmon (Salmo salar L.), halibut (Hippoglossus hippoglossus L.), haddock (Melanogrammus aeglefinus L.) and cod (Gadus morhua L.) and a comparison made of some physical and biochemical properties. The molecular weight of IgM varied between the different species and between the different analytical methods used. IgM from all four species was tetrameric in serum although a proportion of the molecule was held together by noncovalent forces. Salmon and haddock IgM were composed of two IgM types as regards the overall charge whereas halibut and cod IgM were homogeneous in this respect. The molecular weight of the heavy and light chains was similar for all four species. The oligosaccharide moiety, which was N-linked and associated with the heavy chain varied from 7.8 to 11.4% of the total molecular weight. Lectin analysis indicated variable composition of the carbohydrate moiety between species. The sensitivity to PNGase and trypsin varied between the four species.",
"title": ""
}
] |
scidocsrr
|
e3ed8264cc69018f910f243fb1cf3be5
|
Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment
|
[
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
}
] |
[
{
"docid": "a23aa9d2a0a100e805e3c25399f4f361",
"text": "Cases of poisoning by oleander (Nerium oleander) were observed in several species, except in goats. This study aimed to evaluate the pathological effects of oleander in goats. The experimental design used three goats per group: the control group, which did not receive oleander and the experimental group, which received leaves of oleander (50 mg/kg/day) for six consecutive days. On the seventh day, goats received 110 mg/kg of oleander leaves four times at one-hourly interval. A last dose of 330 mg/kg of oleander leaves was given subsequently. After the last dose was administered, clinical signs such as apathy, colic, vocalizations, hyperpnea, polyuria, and moderate rumen distention were observed. Electrocardiogram revealed second-degree atrioventricular block. Death occurred on an average at 92 min after the last dosing. Microscopic evaluation revealed renal necrosis at convoluted and collector tubules and slight myocardial degeneration was observed by unequal staining of cardiomyocytes. Data suggest that goats appear to respond to oleander poisoning in a manner similar to other species.",
"title": ""
},
{
"docid": "fd3c142e5cf4761fdd541e0d8e6c584f",
"text": "In this paper, we quantify the potential gains of hybrid CDN-P2P for two of the leading CDN companies, Akamai and Limelight. We first develop novel measurement methodology for mapping the topologies of CDN networks. We then consider ISP-friendly P2P distribution schemes which work in conjunction with the CDNs to localize traffic within regions of ISPs. To evaluate these schemes, we use two recent, real-world traces: a video-on-demand trace and a large-scale software update trace. We find that hybrid CDN-P2P can significantly reduce the cost of content distribution, even when peer sharing is localized within ISPs and further localized within regions of ISPs. We conclude that hybrid CDN-P2P distribution can economically satisfy the exponential growth of Internet video content without placing an unacceptable burden on regional ISPs.",
"title": ""
},
{
"docid": "71cfe271074c299f879f30dcdb5073a9",
"text": "Instrument recognition is a fundamental task in music information retrieval, yet little has been done to predict the presence of instruments in multi-instrument music for each time frame. This task is important for not only automatic transcription but also many retrieval problems. In this paper, we use the newly released MusicNet dataset to study this front, by building and evaluating a convolutional neural network for making frame-level instrument prediction. We consider it as a multi-label classification problem for each frame and use frame-level annotations as the supervisory signal in training the network. Moreover, we experiment with different ways to incorporate pitch information to our model, with the premise that doing so informs the model the notes that are active per frame, and also encourages the model to learn relative rates of energy buildup in the harmonic partials of different instruments. Experiments show salient performance improvement over baseline methods. We also report an analysis probing how pitch information helps the instrument prediction task. Code and experiment details can be found at https://biboamy. github.io/instrument-recognition/.",
"title": ""
},
{
"docid": "09fc037f5bae784f701f507706e4eafb",
"text": "Big data analytics requires technologies to efficiently process large quantities of data. Moreover, especially in decision making, it not only requires individual intellectual capabilities in the analytical activities but also collective knowledge. Very often, people with diverse expert knowledge need to work together towards a meaningful interpretation of the associated results for new insight. Thus, a big data analysis infrastructure must both support technical innovation and effectively accommodate input from multiple human experts. In this chapter, we aim to advance our understanding on the synergy between human and machine intelligence in tackling big data analysis. Sensemaking models for big data analysis were explored and used to inform the development of a generic conceptual architecture as a means to frame the requirements of such an analysis and to position the role of both technology and human in this synergetic relationship. Two contrasting real-world use case studies were undertaken to test the applicability of the proposed architecture for the development of a supporting platform for big data analysis. Reflection on this outcome has further advanced our understanding on the complexity and the potential of individual and collaborative sensemaking models for big data analytics.",
"title": ""
},
{
"docid": "ad5a8c3ee37219868d056b341300008e",
"text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.",
"title": ""
},
{
"docid": "78115381712dc06cdaeb91ef506e5e37",
"text": "Integration is a key step in utilizing advances in GaN technologies and enabling efficient switched-mode power conversion at very high frequencies (VHF). This paper addresses design and implementation of monolithic GaN half-bridge power stages with integrated gate drivers optimized for pulsewidth-modulated (PWM) dc-dc converters operating at 100 MHz switching frequency. Three gate-driver circuit topologies are considered for integration with half-bridge power stages in a 0.15-μm depletion-mode GaN-on-SiC process: an active pull-up driver, a bootstrapped driver, and a novel modified active pull-up driver. An analytical loss model is developed and used to optimize the monolithic GaN chips, which are then used to construct 20 V, 5 W, 100 MHz synchronous buck converter prototypes. With the bootstrapped and the modified pull-up gate-driver circuits, power stage efficiencies above 91% and total efficiencies close to 88% are demonstrated. The modified active pull-up driver, which offers 80% reduction in the driver area, is found to be the best-performing approach in the depletion-mode GaN process. These results demonstrate feasibility of high-efficiency VHF PWM dc-dc converters based on high levels of integration in GaN processes.",
"title": ""
},
{
"docid": "598dd39ec35921242b94f17e24b30389",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
{
"docid": "0ec17619360b449543017274c9640aff",
"text": "Conventional horizontal evolutionary prototyping for small-data system development is inadequate and too expensive for identifying, analyzing, and mitigating risks in big data system development. RASP (Risk-Based, Architecture-Centric Strategic Prototyping) is a model for cost-effective, systematic risk management in agile big data system development. It uses prototyping strategically and only in areas that architecture analysis can't sufficiently address. Developers use less costly vertical evolutionary prototypes instead of blindly building full-scale prototypes. An embedded multiple-case study of nine big data projects at a global outsourcing firm validated RASP. A decision flowchart and guidelines distilled from lessons learned can help architects decide whether, when, and how to do strategic prototyping. This article is part of a special issue on Software Engineering for Big Data Systems.",
"title": ""
},
{
"docid": "7bbfafb6de6ccd50a4a708af76588beb",
"text": "In this paper we present a system for mobile augmented reality (AR) based on visual recognition. We split the tasks of recognizing an object and tracking it on the user's screen into a server-side and a client-side task, respectively. The capabilities of this hybrid client-server approach are demonstrated with a prototype application on the Android platform, which is able to augment both stationary (landmarks) and non stationary (media covers) objects. The database on the server side consists of hundreds of thousands of landmarks, which is crawled using a state of the art mining method for community photo collections. In addition to the landmark images, we also integrate a database of media covers with millions of items. Retrieval from these databases is done using vocabularies of local visual features. In order to fulfill the real-time constraints for AR applications, we introduce a method to speed-up geometric verification of feature matches. The client-side tracking of recognized objects builds on a multi-modal combination of visual features and sensor measurements. Here, we also introduce a motion estimation method, which is more efficient and precise than similar approaches. To the best of our knowledge this is the first system, which demonstrates a complete pipeline for augmented reality on mobile devices with visual object recognition scaled to millions of objects combined with real-time object tracking.",
"title": ""
},
{
"docid": "c5e506e67a2f916742479e5be59370be",
"text": "Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true in Cyber-Physical Systems because high-fidelity physical models of systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but do not provide guarantees of safe operation. This paper presents an approach for provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification. Our main insight is that formal verification combined with verified runtime monitoring can ensure the safety of a learning agent. Verification results are preserved whenever learning agents limit exploration within the confounds of verified control choices as long as observed reality comports with the model used for off-line verification. When a model violation is detected, the agent abandons efficiency and instead attempts to learn a control strategy that guides the agent to a modeled portion of the state space. We prove that our approach toward incorporating knowledge about safe control into learning systems preserves safety guarantees, and demonstrate that we retain the empirical performance benefits provided by reinforcement learning. We also explore various points in the design space for these justified speculative controllers in a simple model of adaptive cruise control model for autonomous cars.",
"title": ""
},
{
"docid": "ca24d5e4308245c77c830eefdaf3fecd",
"text": "As technology and human-computer interaction advances, there is an increased interest in affective computing. One of the current challenges in computational speech and text processing is addressing affective and expressive meaning, an area that has received fairly sparse attention in linguistics. Linguistic investigation in this area is motivated both by the need for scientific study of subjective language phenomena, and by useful applications such as expressive text-to-speech synthesis. The study makes contributions to the study of affect and language, by describing a novel data resource, outlining models and challenges for exploring affect in language, applying computational methods toward this problem with included empirical results, and suggesting paths for further research. After the introduction, followed by a survey of several areas of related work in Chapter 2, Chapter 3 presents a newly developed sentence-annotated corpus resource divided into three parts for large-scale exploration of affect in texts (specifically tales). Besides covering annotation and data set description, the chapter includes a hierarchical affect model and a qualitative-interpretive examination suggesting characteristics of a subset of the data marked by high agreement in affective label assignments. Chapter 4 is devoted to experimental work on automatic affect prediction in text. Different computational methods are explored based on the labeled data set and affect hierarchy outlined in the previous chapter, with an emphasis on supervised machine learning whose results seem particularly interesting when including true affect history in the feature set. Moreover, besides contrasting classification accuracy of methods in isolation, methods’ predictions are combined with weighting approaches into a joint prediction. In addition, classification with the high agreement data is specifically explored, and the impact of access to knowledge about previous affect history is contrasted empirically. Chapter 5 moves on to discuss emotion in speech. It applies interactive evolutionary computation to evolve fundamental parameters of emotional prosody in perceptual experiments with human listeners, indicating both emotion-specific trends and types of variations, and implications at the local word-level. Chapter 6 provides suggestions for continued work in related and novel areas. A concluding chapter summarizes the dissertation and its contributions.",
"title": ""
},
{
"docid": "c4d16a752ccb6cd11989593604887960",
"text": "Normalizing flows and autoregressive models have been successfully combined to produce state-of-the-art results in density estimation, via Masked Autoregressive Flows (MAF) (Papamakarios et al., 2017), and to accelerate stateof-the-art WaveNet-based speech synthesis to 20x faster than real-time (Oord et al., 2017), via Inverse Autoregressive Flows (IAF) (Kingma et al., 2016). We unify and generalize these approaches, replacing the (conditionally) affine univariate transformations of MAF/IAF with a more general class of invertible univariate transformations expressed as monotonic neural networks. We demonstrate that the proposed neural autoregressive flows (NAF) are universal approximators for continuous probability distributions, and their greater expressivity allows them to better capture multimodal target distributions. Experimentally, NAF yields state-of-the-art performance on a suite of density estimation tasks and outperforms IAF in variational autoencoders trained on binarized MNIST. 1",
"title": ""
},
{
"docid": "bb799a3aac27f4ac764649e1f58ee9fb",
"text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.",
"title": ""
},
{
"docid": "72845c1eebbe683bfb91db2ddd5b0fee",
"text": "Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Conference’17, July 2017, Washington, DC, USA © 2018 Copyright held by the owner/author(s). ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. https://doi.org/10.1145/nnnnnnn.nnnnnnn This is the authors version of the work. It is posted by permission of ACM for your personal use. Not for redistribution. The definite version will be published in PACMCGIT.",
"title": ""
},
{
"docid": "256c0cbf8a89c92645154dde78e97d8e",
"text": "Linear algebra is one of the required mathematics courses for students majoring in computer science. With the small class sizes at our institution, we have the opportunity to use teaching strategies often reserved for senior level courses at larger universities. In this paper the authors discuss their experience with an innovative project schema that is designed for students in an elementary linear algebra course, and how it fullfills the requirements from the report to ACM [11].",
"title": ""
},
{
"docid": "609e906f454180a2ee29f940eba49afc",
"text": "Future 5G deployments will embrace a multitude of novel technologies that will significantly change the air interface, system architecture, and service delivery platforms. However, compared to previous migrations to next-generation technologies, the implementation of mobile networks will receive particular attention this time. The virtualization of network functionality, the application of open, standardized, and inter-operable software, as well as the use of commodity hardware will transform mobile-network technology. In this paper, we focus on the benefits, challenges, and limitations that accompany virtualization in 5G radio access networks (RANs). Within the context of virtualized RAN, we consider its implementation requirements and analyze its cost. We also outline the impact on standardization, which will continue to involve 3GPP but will engage new players whose inclusion in the discussion encourages novel implementation concepts.",
"title": ""
},
{
"docid": "12396d817f7170015fdbbb7a9179cd75",
"text": "This paper examines the generality of features extracted from heart rate (HR) and skin conductance (SC) signals as predictors of self-reported player affect expressed as pairwise preferences. Artificial neural networks are trained to accurately map physiological features to expressed affect in two dissimilar and independent game surveys. The performance of the obtained affective models which are trained on one game is tested on the unseen physiological and selfreported data of the other game. Results in this early study suggest that there exist features of HR and SC such as average HR and one and two-step SC variation that are able to predict affective states across games of different genre and dissimilar game mechanics.",
"title": ""
},
{
"docid": "7a4397cfa1e9d8bd5c7ba92a88bd0621",
"text": "The multidimensional knapsack problem (MKP) is a well-known NP-hard optimization problem. Various meta-heuristic methods are dedicated to solve this problem in literature. Recently a new meta-heuristic algorithm, called artificial algae algorithm (AAA), was presented, which has been successfully applied to solve various continuous optimization problems. However, due to its continuous nature, AAA cannot settle the discrete problem straightforwardly such as MKP. In view of this, this paper proposes a binary artificial algae algorithm (BAAA) to efficiently solve MKP. This algorithm is composed of discrete process, repair operators and elite local search. In discrete process, two logistic functions with different coefficients of curve are studied to achieve good discrete process results. Repair operators are performed to make the solution feasible and increase the efficiency. Finally, elite local search is introduced to improve the quality of solutions. To demonstrate the efficiency of our proposed algorithm, simulations and evaluations are carried out with total of 94 benchmark problems and compared with other bio-inspired state-of-the-art algorithms in the recent years including MBPSO, BPSOTVAC, CBPSOTVAC, GADS, bAFSA, and IbAFSA. The results show the superiority of BAAA to many compared existing algorithms.",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
},
{
"docid": "93388c2897ec6ec7141bcc820ab6734c",
"text": "We address the task of single depth image inpainting. Without the corresponding color images, previous or next frames, depth image inpainting is quite challenging. One natural solution is to regard the image as a matrix and adopt the low rank regularization just as color image inpainting. However, the low rank assumption does not make full use of the properties of depth images. A shallow observation inspires us to penalize the nonzero gradients by sparse gradient regularization. However, statistics show that though most pixels have zero gradients, there is still a non-ignorable part of pixels, whose gradients are small but nonzero. Based on this property of depth images, we propose a low gradient regularization method in which we reduce the penalty for small gradients while penalizing the nonzero gradients to allow for gradual depth changes. The proposed low gradient regularization is integrated with the low rank regularization into the low rank low gradient approach for depth image inpainting. We compare our proposed low gradient regularization with the sparse gradient regularization. The experimental results show the effectiveness of our proposed approach.",
"title": ""
}
] |
scidocsrr
|
19852e903954e1a2f614c147e134c4d9
|
Discovering Signals from Web Sources to Predict Cyber Attacks
|
[
{
"docid": "b45bb513f7bd9de4941785490945d53e",
"text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.",
"title": ""
}
] |
[
{
"docid": "6620d6177ed14871321314f307746d85",
"text": "Global software engineering increases coordination, communication, and control challenges in software development. The testing phase in this context is not a widely researched subject. In this paper, we study the outsourcing of software testing in the Oulu area, research the ways in which it is used, and determine the observable benefits and obstacles. The companies that participated in this study were found to use the outsourcing possibility of software testing with good efficiency and their testing process was considered to be mature. The most common benefits, in addition to the companies' cost savings, included the utilization of time zone differences for around-the-clock productivity, a closer proximity to the market, an improved record of communication and the tools that record the audit materials. The most commonly realized difficulties consisted of teamwork challenges, a disparate tool infrastructure, tool expense, and often-elevated coordination costs. We utilized in our study two matrices that consist in one dimension of the three distances, control, coordination, and communication, and in another dimension of four distances, temporal, geographical, socio-cultural and technical. The technical distance was our extension to the matrix that has been used as the basis for many other studies about global software development and outsourcing efforts. Our observations justify the extension of matrices with respect to the technical distance.",
"title": ""
},
{
"docid": "437d9a2146e05be85173b14176e4327c",
"text": "Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.",
"title": ""
},
{
"docid": "4620a43aeb0164cb3622159f07e30c51",
"text": "Recent advances in the development of genome editing technologies based on programmable nucleases have substantially improved our ability to make precise changes in the genomes of eukaryotic cells. Genome editing is already broadening our ability to elucidate the contribution of genetics to disease by facilitating the creation of more accurate cellular and animal models of pathological processes. A particularly tantalizing application of programmable nucleases is the potential to directly correct genetic mutations in affected tissues and cells to treat diseases that are refractory to traditional therapies. Here we discuss current progress toward developing programmable nuclease–based therapies as well as future prospects and challenges.",
"title": ""
},
{
"docid": "799be9729a01234c236431f5c754de8f",
"text": "This meta-analytic review of 42 studies covering 8,009 participants (ages 4-20) examines the relation of moral emotion attributions to prosocial and antisocial behavior. A significant association is found between moral emotion attributions and prosocial and antisocial behaviors (d = .26, 95% CI [.15, .38]; d = .39, 95% CI [.29, .49]). Effect sizes differ considerably across studies and this heterogeneity is attributed to moderator variables. Specifically, effect sizes for predicted antisocial behavior are larger for self-attributed moral emotions than for emotions attributed to hypothetical story characters. Effect sizes for prosocial and antisocial behaviors are associated with several other study characteristics. Results are discussed with respect to the potential significance of moral emotion attributions for the social behavior of children and adolescents.",
"title": ""
},
{
"docid": "64c156ee4171b5b84fd4eedb1d922f55",
"text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.",
"title": ""
},
{
"docid": "24fb823473adbf32ed726cfb2a239585",
"text": "First, we classify the selected customers into clusters using RFM model to identify high-profit, gold customers. Subsequently, we carry out data mining using association rules algorithm. We measure the similarity, difference and modified difference of mined association rules based on three rules, i.e. Emerging Patten Rule, Unexpected Change Rule, and Added/Perished Rule. In the meantime, we use rule matching threshold to derive all types of rules and explore the rules with significant change based on the degree of change measured. In this paper, we employ data mining tools and effectively discover the current spending pattern of customers and trends of behavioral change, which will allow management to detect in a large database potential changes of customer preference, and provide as early as possible products and services desired by the customers to expand the clientele base and prevent customer attrition.",
"title": ""
},
{
"docid": "f6a66ea4a5e8683bae76e71912694874",
"text": "We consider the task of learning visual connections between object categories using the ImageNet dataset, which is a large-scale dataset ontology containing more than 15 thousand object classes. We want to discover visual relationships between the classes that are currently missing (such as similar colors or shapes or textures). In this work we learn 20 visual attributes and use them both in a zero-shot transfer learning experiment as well as to make visual connections between semantically unrelated object categories.",
"title": ""
},
{
"docid": "4e91d37de7701e4a03c506c602ef3455",
"text": "This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.",
"title": ""
},
{
"docid": "1bc81463278f9efcd098c36dba8cafad",
"text": "T value of promotional marketing and word-of-mouth (WOM) is well recognized, but few studies have compared the effects of these two types of information in online settings. This research examines the effect of marketing efforts and online WOM on product sales by measuring the effects of online coupons, sponsored keyword search, and online reviews. It aims to understand the relationship between firms’ promotional marketing and WOM in the context of a third party review platform. Using a three-year panel data set from one of the biggest restaurant review websites in China, the study finds that both online promotional marketing and reviews have a significant impact on product sales, which suggests promotional marketing on third party review platforms is still an effective marketing tool. This research further explores the interaction effects between WOM and promotional marketing when these two types of information coexist. The results demonstrate a substitute relationship between the WOM volume and coupon offerings, but a complementary relationship between WOM volume and keyword advertising.",
"title": ""
},
{
"docid": "568bc5272373a4e3fd38304f2c381e0f",
"text": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.",
"title": ""
},
{
"docid": "d9df98fbd7281b67347df0f2643323fa",
"text": "Predefined categories can be assigned to the natural language text using for text classification. It is a “bag-of-word” representation, previous documents have a word with values, it represents how frequently this word appears in the document or not. But large documents may face many problems because they have irrelevant or abundant information is there. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.",
"title": ""
},
{
"docid": "98689a2f03193a2fb5cc5195ef735483",
"text": "Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement start to investigate the darknet markets to study the cybercriminal networks and predict future incidents. However, vendors in these markets often create multiple accounts (\\em i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove the possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series deep neural networks to model the photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into the coordinated Sybil activities such as price manipulation, buyer scam, and product stocking and reselling.",
"title": ""
},
{
"docid": "31be3d5db7d49d1bfc58c81efec83bdc",
"text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.",
"title": ""
},
{
"docid": "e0807a0ee11caa23207d3eb7da6c87b4",
"text": "Considering recent advancements and successes in the development of efficient quantum algorithms for electronic structure calculations-alongside impressive results using machine learning techniques for computation-hybridizing quantum computing with machine learning for the intent of performing electronic structure calculations is a natural progression. Here we report a hybrid quantum algorithm employing a restricted Boltzmann machine to obtain accurate molecular potential energy surfaces. By exploiting a quantum algorithm to help optimize the underlying objective function, we obtained an efficient procedure for the calculation of the electronic ground state energy for a small molecule system. Our approach achieves high accuracy for the ground state energy for H2, LiH, H2O at a specific location on its potential energy surface with a finite basis set. With the future availability of larger-scale quantum computers, quantum machine learning techniques are set to become powerful tools to obtain accurate values for electronic structures.",
"title": ""
},
{
"docid": "a429888416cd5c175f3fb2ac90350a06",
"text": "Recent years, Software Defined Routers (SDRs) (programmable routers) have emerged as a viable solution to provide a cost-effective packet processing platform with easy extensibility and programmability. Multi-core platforms significantly promote SDRs’ parallel computing capacities, enabling them to adopt artificial intelligent techniques, i.e., deep learning, to manage routing paths. In this paper, we explore new opportunities in packet processing with deep learning to inexpensively shift the computing needs from rule-based route computation to deep learning based route estimation for high-throughput packet processing. Even though deep learning techniques have been extensively exploited in various computing areas, researchers have, to date, not been able to effectively utilize deep learning based route computation for high-speed core networks. We envision a supervised deep learning system to construct the routing tables and show how the proposed method can be integrated with programmable routers using both Central Processing Units (CPUs) and Graphics Processing Units (GPUs). We demonstrate how our uniquely characterized input and output traffic patterns can enhance the route computation of the deep learning based SDRs through both analysis and extensive computer simulations. In particular, the simulation results demonstrate that our proposal outperforms the benchmark method in terms of delay, throughput, and signaling overhead.",
"title": ""
},
{
"docid": "ac8a0b4ad3f2905bc4e37fa4b0fcbe0a",
"text": "In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS’s operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.",
"title": ""
},
{
"docid": "2796be8f58164ea8ee9e6d7b2f431e59",
"text": "This paper introduces a new approach to database disk buffering, called the LRU-K method. The basic idea of LRU-K is to keep track of the times of the last K references to popular database pages, using this information to statistically estimate the interarrival times of references on a page by page basis. Although the LRU-K approach performs optimal statistical inference under relatively standard assumptions, it is fairly simple and incurs little bookkeeping overhead. As we demonstrate with simulation experiments, the LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages. In fact, LRU-K can approach the behavior of buffering algorithms in which page sets with known access frequencies are manually assigned to different buffer pools of specifically tuned sizes. Unlike such customized buffering algorithms however, the LRU-K method is self-tuning, and does not rely on external hints about workload characteristics. Furthermore, the LRU-K algorithm adapts in real time to changing patterns of access.",
"title": ""
},
{
"docid": "788ea4ece8631c81366e571eb205739f",
"text": "ABSTgACT. Tree pattern matching is an interesting special problem which occurs as a crucial step m a number of programmmg tasks, for instance, design of interpreters for nonprocedural programming languages, automatic implementations of abstract data types, code optimization m compilers, symbohc computation, context searching in structure editors, and automatic theorem provmg. As with the sorting problem, the variations in requirements and resources for each application seem to preclude a uniform, umversal solution to the tree-pattern-matching problem. Instead, a collection of well-analyzed techmques, from which specific applications may be selected and adapted, should be sought. Five new techniques for tree pattern matching are presented, analyzed for time and space complexity, and compared with previously known methods. Particularly important are applications where the same patterns are matched against many subjects and where a subject may be modified incrementally Therefore, methods which spend some tune preprocessmg patterns in order to improve the actual matching time are included",
"title": ""
},
{
"docid": "98110985cd175f088204db452a152853",
"text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.",
"title": ""
},
{
"docid": "43e9fbaedf062a67be3c51b99889a6fb",
"text": "A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low dimensional spaces and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator trained policy.",
"title": ""
}
] |
scidocsrr
|
203e0b31bd5df016c17c3432859ac584
|
An iterative improvement process for agile software development
|
[
{
"docid": "17ae594d70605bc24fb3b2e4a63e5d78",
"text": "Mobile phones have been closed environments until recent years. The change brought by open platform technologies such as the Symbian operating system and Java technologies has opened up a significant business opportunity for anyone to develop application software such as games for mobile terminals. However, developing mobile applications is currently a challenging task due to the specific demands and technical constraints of mobile development. Furthermore, at the moment very little is known about the suitability of the different development processes for mobile application development. Due to these issues, we have developed an agile development approach called Mobile-D. The Mobile-D approach is briefly outlined here and the experiences gained from four case studies are discussed.",
"title": ""
}
] |
[
{
"docid": "d29240eb204f634472ab2e0635c8c885",
"text": "Department of Information Technology and Decision Sciences, College of Business and Public Administration, Old Dominion University Nortfolk, VA, U.S.A.; Department of Statistics and Computer Information Systems, Zicklin School of Business, Baruch College, City University of New York, New York, NY, U.S.A.; Department of Management Science and Information Systems, College of Management, University of Massachusetts Boston, Boston, MA, U.S.A.; Board of Advisors Professor of Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA, U.S.A.",
"title": ""
},
{
"docid": "ee4b8d8e9fdc77ce3f8278f0563d8638",
"text": "A data breakpoint associates debugging actions with programmer-specified conditions on the memory state of an executing program. Data breakpoints provide a means for discovering program bugs that are tedious or impossible to isolate using control breakpoints alone. In practice, programmers rarely use data breakpoints, because they are either unimplemented or prohibitively slow in available debugging software. In this paper, we present the design and implementation of a practical data breakpoint facility.\nA data breakpoint facility must monitor all memory updates performed by the program being debugged. We implemented and evaluated two complementary techniques for reducing the overhead of monitoring memory updates. First, we checked write instructions by inserting checking code directly into the program being debugged. The checks use a segmented bitmap data structure that minimizes address lookup complexity. Second, we developed data flow algorithms that eliminate checks on some classes of write instructions but may increase the complexity of the remaining checks.\nWe evaluated these techniques on the SPARC using the SPEC benchmarks. Checking each write instruction using a segmented bitmap achieved an average overhead of 42%. This overhead is independent of the number of breakpoints in use. Data flow analysis eliminated an average of 79% of the dynamic write checks. For scientific programs such the NAS kernels, analysis reduced write checks by a factor of ten or more. On the SPARC these optimizations reduced the average overhead to 25%.",
"title": ""
},
{
"docid": "2280049bb99f82739f1007566627bd94",
"text": "As is well known antenna size implies limitations in bandwidth and gain characteristics. However, if non-Foster / Negative circuits are employed such restrictions can be partially overcome. That is negative matching circuits can provide a way to construct very small efficient antennas. Non-Foster circuits have been considered since 1950's when Negative Impedance Converters (NIC) were proposed [1]. NIC's can be realized via a combination of active devices (transistors) as well as lumped capacitors and inductors. Still, inherent losses, noise issues and bandwidth limitations from available active elements have precluded their use except for narrow bandwidth applications. But, recent advancements in integrated circuits have renewed interest in non-Foster/negative matching.",
"title": ""
},
{
"docid": "150c458df57685b78b0cc02953c98ff7",
"text": "The CRISPR-associated protein Cas9 is an RNA-guided endonuclease that cleaves double-stranded DNA bearing sequences complementary to a 20-nucleotide segment in the guide RNA. Cas9 has emerged as a versatile molecular tool for genome editing and gene expression control. RNA-guided DNA recognition and cleavage strictly require the presence of a protospacer adjacent motif (PAM) in the target DNA. Here we report a crystal structure of Streptococcus pyogenes Cas9 in complex with a single-molecule guide RNA and a target DNA containing a canonical 5′-NGG-3′ PAM. The structure reveals that the PAM motif resides in a base-paired DNA duplex. The non-complementary strand GG dinucleotide is read out via major-groove interactions with conserved arginine residues from the carboxy-terminal domain of Cas9. Interactions with the minor groove of the PAM duplex and the phosphodiester group at the +1 position in the target DNA strand contribute to local strand separation immediately upstream of the PAM. These observations suggest a mechanism for PAM-dependent target DNA melting and RNA–DNA hybrid formation. Furthermore, this study establishes a framework for the rational engineering of Cas9 enzymes with novel PAM specificities.",
"title": ""
},
{
"docid": "e1066f3b7ff82667dbc7186f357dd406",
"text": "Generative adversarial networks (GANs) are becoming increasingly popular for image processing tasks. Researchers have started using GAN s for speech enhancement, but the advantage of using the GAN framework has not been established for speech enhancement. For example, a recent study reports encouraging enhancement results, but we find that the architecture of the generator used in the GAN gives better performance when it is trained alone using the $L_1$ loss. This work presents a new GAN for speech enhancement, and obtains performance improvement with the help of adversarial training. A deep neural network (DNN) is used for time-frequency mask estimation, and it is trained in two ways: regular training with the $L_1$ loss and training using the GAN framework with the help of an adversary discriminator. Experimental results suggest that the GAN framework improves speech enhancement performance. Further exploration of loss functions, for speech enhancement, suggests that the $L_1$ loss is consistently better than the $L_2$ loss for improving the perceptual quality of noisy speech.",
"title": ""
},
{
"docid": "cb100f526d1eacdde05c7275ff330a24",
"text": "PURPOSE\nThe ability to focus and sustain one's attention is critical for learning. Children with autism demonstrate unusual characteristics of attention from infancy. It is reasonable to assume that early anomalies in attention influence a child's developmental trajectories. Therapeutic interventions for autism often focus on core features of autism such as communication and socialization, while very few interventions specifically address attention. The purpose of this article is to provide clinicians a description of attention characteristics in children with autism and discuss interventions thought to improve attention.\n\n\nMETHOD\nCharacteristics of attention in children with autism are presented. Intervention studies featuring measures of attention as an outcome variable for young children with autism are reviewed to present interventions that have empirical evidence for improvements in attention. Results are synthesized by strategy, specific feature of attention targeted, and results for both habilitative goals and accommodations for attention.\n\n\nCONCLUSION\nAlthough research is not extensive, several strategies to support attention in young children with autism have been investigated. The empirical findings regarding these strategies can inform evidence-based practice.",
"title": ""
},
{
"docid": "fdbe390730b949ccaa060a84257af2f1",
"text": "An increase in the prevalence of chronic disease has led to a rise in the demand for primary healthcare services in many developed countries. Healthcare technology tools may provide the leverage to alleviate the shortage of primary care providers. Here we describe the development and usage of an automated healthcare kiosk for the management of patients with stable chronic disease in the primary care setting. One-hundred patients with stable chronic disease were recruited from a primary care clinic. They used a kiosk in place of doctors’ consultations for two subsequent follow-up visits. Patient and physician satisfaction with kiosk usage were measured on a Likert scale. Kiosk blood pressure measurements and triage decisions were validated and optimized. Patients were assessed if they could use the kiosk independently. Patients and physicians were satisfied with all areas of kiosk usage. Kiosk triage decisions were accurate by the 2nd month of the study. Blood pressure measurements by the kiosk were equivalent to that taken by a nurse (p = 0.30, 0.14). Independent kiosk usage depended on patients’ language skills and educational levels. Healthcare kiosks represent an alternative way to manage patients with stable chronic disease. They have the potential to replace physician visits and improve access to primary healthcare. Patients welcome the use of healthcare technology tools, including those with limited literacy and education. Optimization of environmental and patient factors may be required prior to the implementation of kiosk-based technology in the healthcare setting.",
"title": ""
},
{
"docid": "92ed9b25fd8ac724dae55642f249f48e",
"text": "This paper aims to critically review the existing l iterature on the relationship between Corporate Governance, in particular board diversity , and its influence on both Corporate Social Responsibility (CSR) and Corporate Social Re sponsibility Reporting (CSRR) and suggest some important avenues for future research in this field. Assuming that both CSR and CSRR are outcomes of boards’ decisions, this pa per proposes that examining boards’ decision making process with regard to CSR would pr ovide more insight into the link between board diversity and CSR. Particularly, the paper stresses the importance of studies linking gender diversity and CSR decision making pr ocesses which is quite rare in the existing literature. It also identifies some of th e important methodological drawbacks in the previous literature and highlights the importan ce of rigorous qualitative methods and longitudinal studies for the development of underst anding of the diversity-CSR relationship.",
"title": ""
},
{
"docid": "7dde491c895d8c8ee852521a09b0117b",
"text": "The Ad hoc On-Demand Distance Vector (AODV) routing protocol is designed for use in ad hoc mobile networks. Because of t he difficulty of testing an ad hoc routing protocol in a real-world environme nt, a simulation was first created so that the protocol design could be tested i n a variety of scenarios. Once simulation of the protocol was nearly complete , the simulation was used as the basis for an implementation in the Linux opera ting system. In the course of converting the simulation into an implement ation, certain modifications were needed in AODV and the Linux kernel due to b th simplifications made in the simulation of AODV and to incompatib ilities of the Linux kernel and the IP-layer to routing in a mobile environment. This paper details many of the changes that were necessary during th e development of the implementation.",
"title": ""
},
{
"docid": "4f3e37db8d656fe1e746d6d3a37878b5",
"text": "Shorter product life cycles and aggressive marketing, among other factors, have increased the complexity of sales forecasting. Forecasts are often produced using a Forecasting Support System that integrates univariate statistical forecasting with managerial judgment. Forecasting sales under promotional activity is one of the main reasons to use expert judgment. Alternatively, one can replace expert adjustments by regression models whose exogenous inputs are promotion features (price, display, etc.). However, these regression models may have large dimensionality as well as multicollinearity issues. We propose a novel promotional model that overcomes these limitations. It combines Principal Component Analysis to reduce the dimensionality of the problem and automatically identifies the demand dynamics. For items with limited history, the proposed model is capable of providing promotional forecasts by selectively pooling information across established products. The performance of the model is compared against forecasts provided by experts and statistical benchmarks, on weekly data; outperforming both substantially.",
"title": ""
},
{
"docid": "1aa7e7fe70bdcbc22b5d59b0605c34e9",
"text": "Surgical tasks are complex multi-step sequences of smaller subtasks (often called surgemes) and it is useful to segment task demonstrations into meaningful subsequences for:(a) extracting finite-state machines for automation, (b) surgical training and skill assessment, and (c) task classification. Existing supervised methods for task segmentation use segment labels from a dictionary of motions to build classifiers. However, as the datasets become voluminous, the labeling becomes arduous and further, this method doesnt́ generalize to new tasks that dont́ use the same dictionary. We propose an unsupervised semantic task segmentation framework by learning “milestones”, ellipsoidal regions of the position and feature states at which a task transitions between motion regimes modeled as locally linear. Milestone learning uses a hierarchy of Dirichlet Process Mixture Models, learned through Expectation-Maximization, to cluster the transition points and optimize the number of clusters. It leverages transition information from kinematic state as well as environment state such as visual features. We also introduce a compaction step which removes repetitive segments that correspond to a mid-demonstration failure recovery by retrying an action. We evaluate Milestones Learning on three surgical subtasks: pattern cutting, suturing, and needle passing. Initial results suggest that our milestones qualitatively match manually annotated segmentation. While one-to-one correspondence of milestones with annotated data is not meaningful, the milestones recovered from our method have exactly one annotated surgeme transition in 74% (needle passing) and 66% (suturing) of total milestones, indicating a semantic match.",
"title": ""
},
{
"docid": "c9df206d8c0bc671f3109c1c7b12b149",
"text": "Internet of Things (IoT) — a unified network of physical objects that can change the parameters of the environment or their own, gather information and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. The IoT is enabled by the latest developments, smart sensors, communication technologies, and Internet protocols. This article contains a description of lnternet of things (IoT) networks. Much attention is given to prospects for future of using IoT and it's development. Some problems of development IoT are were noted. The article also gives valuable information on building(construction) IoT systems based on PLC technology.",
"title": ""
},
{
"docid": "016ba468269a1693cb49005712e00d52",
"text": "In 2011, Google released a one-month production trace with hundreds of thousands of jobs running across over 12,000 heterogeneous hosts. In order to perform in-depth research based on the trace, it is necessary to construct a close-to-practice simulation system. In this paper, we devise a distributed cloud simulator (or toolkit) based on virtual machines, with three important features. (1) The dynamic changing resource amounts (such as CPU rate and memory size) consumed by the reproduced jobs can be emulated as closely as possible to the real values in the trace. (2) Various types of events (e.g., kill/evict event) can be emulated precisely based on the trace. (3) Our simulation toolkit is able to emulate more complex and useful cases beyond the original trace to adapt to various research demands. We evaluate the system on a real cluster environment with 16×8=128 cores and 112 virtual machines (VMs) constructed by XEN hypervisor. To the best of our knowledge, this is the first work to reproduce Google cloud environment with real experimental system setting and real-world large scale production trace. Experiments show that our simulation system could effectively reproduce the real checkpointing/restart events based on Google trace, by leveraging Berkeley Lab Checkpoint/Restart (BLCR) tool. It can simultaneously process up to 1200 emulated Google jobs over the 112 VMs. Such a simulation toolkit has been released as a GNU GPL v3 software for free downloading, and it has been successfully applied to the fundamental research on the optimization of checkpoint intervals for Google tasks. Copyright c ⃝ 2013 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "2f7a15b3d922d9a1d03a6851be5f6622",
"text": "The clinical relevance of T cells in the control of a diverse set of human cancers is now beyond doubt. However, the nature of the antigens that allow the immune system to distinguish cancer cells from noncancer cells has long remained obscure. Recent technological innovations have made it possible to dissect the immune response to patient-specific neoantigens that arise as a consequence of tumor-specific mutations, and emerging data suggest that recognition of such neoantigens is a major factor in the activity of clinical immunotherapies. These observations indicate that neoantigen load may form a biomarker in cancer immunotherapy and provide an incentive for the development of novel therapeutic approaches that selectively enhance T cell reactivity against this class of antigens.",
"title": ""
},
{
"docid": "fbb76049d6192e4571ede961f1e413a8",
"text": "We present ongoing work on a gold standard annotation of German terminology in an inhomogeneous domain. The text basis is thematically broad and contains various registers, from expert text to user-generated data taken from an online discussion forum. We identify issues related with these properties, and show our approach how to model the domain. Futhermore, we present our approach to handle multiword terms, including discontinuous ones. Finally, we evaluate the annotation quality.",
"title": ""
},
{
"docid": "cd29357697fafb5aa5b66807f746b682",
"text": "Autonomous path planning algorithms are significant to planetary exploration rovers, since relying on commands from Earth will heavily reduce their efficiency of executing exploration missions. This paper proposes a novel learning-based algorithm to deal with global path planning problem for planetary exploration rovers. Specifically, a novel deep convolutional neural network with double branches (DB-CNN) is designed and trained, which can plan path directly from orbital images of planetary surfaces without implementing environment mapping. Moreover, the planning procedure requires no prior knowledge about planetary surface terrains. Finally, experimental results demonstrate that DBCNN achieves better performance on global path planning and faster convergence during training compared with the existing Value Iteration Network (VIN).",
"title": ""
},
{
"docid": "0d9340dc849332af5854380fa460cfd5",
"text": "Many scientific datasets archive a large number of variables over time. These timeseries data streams typically track many variables over relatively long periods of time, and therefore are often both wide and deep. In this paper, we describe the Visual Query Language (VQL) [3], a technology for locating time series patterns in historical or real time data. The user interactively specifies a search pattern, VQL finds similar shapes, and returns a ranked list of matches. VQL supports both univariate and multivariate queries, and allows the user to interactively specify the the quality of the match, including temporal warping, amplitude warping, and temporal constraints between features.",
"title": ""
},
{
"docid": "3b54c700cf096551d8064e2c84aeea2f",
"text": "Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval.",
"title": ""
},
{
"docid": "6a7bc6a1f1d9486304edac87635dc0e9",
"text": "We exploit the falloff of acuity in the visual periphery to accelerate graphics computation by a factor of 5-6 on a desktop HD display (1920x1080). Our method tracks the user's gaze point and renders three image layers around it at progressively higher angular size but lower sampling rate. The three layers are then magnified to display resolution and smoothly composited. We develop a general and efficient antialiasing algorithm easily retrofitted into existing graphics code to minimize \"twinkling\" artifacts in the lower-resolution layers. A standard psychophysical model for acuity falloff assumes that minimum detectable angular size increases linearly as a function of eccentricity. Given the slope characterizing this falloff, we automatically compute layer sizes and sampling rates. The result looks like a full-resolution image but reduces the number of pixels shaded by a factor of 10-15.\n We performed a user study to validate these results. It identifies two levels of foveation quality: a more conservative one in which users reported foveated rendering quality as equivalent to or better than non-foveated when directly shown both, and a more aggressive one in which users were unable to correctly label as increasing or decreasing a short quality progression relative to a high-quality foveated reference. Based on this user study, we obtain a slope value for the model of 1.32-1.65 arc minutes per degree of eccentricity. This allows us to predict two future advantages of foveated rendering: (1) bigger savings with larger, sharper displays than exist currently (e.g. 100 times speedup at a field of view of 70° and resolution matching foveal acuity), and (2) a roughly linear (rather than quadratic or worse) increase in rendering cost with increasing display field of view, for planar displays at a constant sharpness.",
"title": ""
},
{
"docid": "31fe8edc8fa4d336801a4ab8d1d2d5f2",
"text": "In this paper we describe our system for SemEval-2018 Task 7 on classification of semantic relations in scientific literature for clean (subtask 1.1) and noisy data (subtask 1.2). We compare two models for classification, a C-LSTM which utilizes only word embeddings and an SVM that also takes handcrafted features into account. To adapt to the domain of science we train word embeddings on scientific papers collected from arXiv.org. The hand-crafted features consist of lexical features to model the semantic relations as well as the entities between which the relation holds. Classification of Relations using Embeddings (ClaiRE) achieved an F1 score of 74.89% for the first subtask and 78.39% for the second.",
"title": ""
}
] |
scidocsrr
|
da7b071b1d73ee2a3a4d5b34ec12408d
|
Beyond natives and immigrants: exploring types of net generation students
|
[
{
"docid": "cf5128cb4259ea87027ddd00189dc931",
"text": "This paper interrogates the currently pervasive discourse of the ‘net generation’ finding the concept of the ‘digital native’ especially problematic, both empirically and conceptually. We draw on a research project of South African higher education students’ access to and use of Information and Communication Technologies (ICTs) to show that age is not a determining factor in students’ digital lives; rather, their familiarity and experience using ICTs is more relevant. We also demonstrate that the notion of a generation of ‘digital natives’ is inaccurate: those with such attributes are effectively a digital elite. Instead of a new net generation growing up to replace an older analogue generation, there is a deepening digital divide in South Africa characterized not by age but by access and opportunity; indeed, digital apartheid is alive and well. We suggest that the possibility for digital democracy does exist in the form of a mobile society which is not age specific, and which is ubiquitous. Finally, we propose redefining the concepts ‘digital’, ‘net’, ‘native’, and ‘generation’ in favour of reclaiming the term ‘digitizen’.",
"title": ""
}
] |
[
{
"docid": "4a240b05fbb665596115841d238a483b",
"text": "BACKGROUND\nAttachment theory is one of the most important achievements of contemporary psychology. Role of medical students in the community health is important, so we need to know about the situation of happiness and attachment style in these students.\n\n\nOBJECTIVES\nThis study was aimed to assess the relationship between medical students' attachment styles and demographic characteristics.\n\n\nMATERIALS AND METHODS\nThis cross-sectional study was conducted on randomly selected students of Medical Sciences in Kurdistan University, in 2012. To collect data, Hazan and Shaver's attachment style measure and the Oxford Happiness Questionnaire were used. The results were analyzed using the SPSS software version 16 (IBM, Chicago IL, USA) and statistical analysis was performed via t-test, Chi-square test, and multiple regression tests.\n\n\nRESULTS\nSecure attachment style was the most common attachment style and the least common was ambivalent attachment style. Avoidant attachment style was more common among single persons than married people (P = 0.03). No significant relationship was observed between attachment style and gender and grade point average of the studied people. The mean happiness score of students was 62.71. In multivariate analysis, the variables of secure attachment style (P = 0.001), male gender (P = 0.005), and scholar achievement (P = 0.047) were associated with higher happiness score.\n\n\nCONCLUSION\nThe most common attachment style was secure attachment style, which can be a positive prognostic factor in medical students, helping them to manage stress. Higher frequency of avoidant attachment style among single persons, compared with married people, is mainly due to their negative attitude toward others and failure to establish and maintain relationships with others.",
"title": ""
},
{
"docid": "9b54c1afe7b7324aa61fe4c2d1a49342",
"text": "This work presents a pass-type ultrawideband power detector MMICs designed for operation from 10 MHz to 50 GHz in a wide dynamic range from -40 dBm to +25 dBm which were fabricated using GaAs zero bias diode process. Directional and non-directional detector designes are reviwed. For good wideband matching with transmission line, bonding wires parameters were taken into account at the stage of MMIC design. Result of this work includes on-wafer measurements of MMICs S-parameters and transfer characteristics.",
"title": ""
},
{
"docid": "85221954ced857c449acab8ee5cf801e",
"text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.",
"title": ""
},
{
"docid": "45c515da4f8e9c383f6d4e0fa6e09192",
"text": "In this paper, we demonstrate our Img2UML system tool. This system tool eliminates the gap between pixel-based diagram and engineering model, that it supports the extraction of the UML class model from images and produces an XMI file of the UML model. In addition to this, Img2UML offers a repository of UML class models of images that have been collected from the Internet. This project has both industrial and academic aims: for industry, this tool proposals a method that enables the updating of software design documentation (that typically contains UML images). For academia, this system unlocks a corpus of UML models that are publicly available, but not easily analyzable for scientific studies.",
"title": ""
},
{
"docid": "71b59076bf36de415c5cf6b86cec165f",
"text": "Most existing structure from motion (SFM) approaches for unordered images cannot handle multiple instances of the same structure in the scene. When image pairs containing different instances are matched based on visual similarity, the pairwise geometric relations as well as the correspondences inferred from such pairs are erroneous, which can lead to catastrophic failures in the reconstruction. In this paper, we investigate the geometric ambiguities caused by the presence of repeated or duplicate structures and show that to disambiguate between multiple hypotheses requires more than pure geometric reasoning. We couple an expectation maximization (EM)-based algorithm that estimates camera poses and identifies the false match-pairs with an efficient sampling method to discover plausible data association hypotheses. The sampling method is informed by geometric and image-based cues. Our algorithm usually recovers the correct data association, even in the presence of large numbers of false pairwise matches.",
"title": ""
},
{
"docid": "d338c807948016bf978aa7a03841f292",
"text": "Emotions accompany everyone in the daily life, playing a key role in non-verbal communication, and they are essential to the understanding of human behavior. Emotion recognition could be done from the text, speech, facial expression or gesture. In this paper, we concentrate on recognition of “inner” emotions from electroencephalogram (EEG) signals as humans could control their facial expressions or vocal intonation. The need and importance of the automatic emotion recognition from EEG signals has grown with increasing role of brain computer interface applications and development of new forms of human-centric and human-driven interaction with digital media. We propose fractal dimension based algorithm of quantification of basic emotions and describe its implementation as a feedback in 3D virtual environments. The user emotions are recognized and visualized in real time on his/her avatar adding one more so-called “emotion dimension” to human computer interfaces.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "c481baeab2091672c044c889b1179b1f",
"text": "Our research is based on an innovative approach that integrates computational thinking and creative thinking in CS1 to improve student learning performance. Referencing Epstein's Generativity Theory, we designed and deployed a suite of creative thinking exercises with linkages to concepts in computer science and computational thinking, with the premise that students can leverage their creative thinking skills to \"unlock\" their understanding of computational thinking. In this paper, we focus on our study on differential impacts of the exercises on different student populations. For all students there was a linear \"dosage effect\" where completion of each additional exercise increased retention of course content. The impacts on course grades, however, were more nuanced. CS majors had a consistent increase for each exercise, while non-majors benefited more from completing at least three exercises. It was also important for freshmen to complete all four exercises. We did find differences between women and men but cannot draw conclusions.",
"title": ""
},
{
"docid": "57df6e1fcd71458e774a5492e8a370de",
"text": "Due to the phenomenal growth of online product reviews, sentiment analysis (SA) has gained huge attention, for example, by online service providers. A number of benchmark datasets for a wide range of domains have been made available for sentiment analysis, especially in resource-rich languages. In this paper we assess the challenges of SA in Hindi by providing a benchmark setup, where we create an annotated dataset of high quality, build machine learning models for sentiment analysis in order to show the effective usage of the dataset, and finally make the resource available to the community for further advancement of research. The dataset comprises of Hindi product reviews crawled from various online sources. Each sentence of the review is annotated with aspect term and its associated sentiment. As classification algorithms we use Conditional Random Filed (CRF) and Support Vector Machine (SVM) for aspect term extraction and sentiment analysis, respectively. Evaluation results show the average F-measure of 41.07% for aspect term extraction and accuracy of 54.05% for sentiment classification.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "957a3970611470b611c024ed3b558115",
"text": "SHARE is a unique panel database of micro data on health, socio-economic status and social and family networks covering most of the European Union and Israel. To date, SHARE has collected three panel waves (2004, 2006, 2010) of current living circumstances and retrospective life histories (2008, SHARELIFE); 6 additional waves are planned until 2024. The more than 150 000 interviews give a broad picture of life after the age of 50 years, measuring physical and mental health, economic and non-economic activities, income and wealth, transfers of time and money within and outside the family as well as life satisfaction and well-being. The data are available to the scientific community free of charge at www.share-project.org after registration. SHARE is harmonized with the US Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA) and has become a role model for several ageing surveys worldwide. SHARE's scientific power is based on its panel design that grasps the dynamic character of the ageing process, its multidisciplinary approach that delivers the full picture of individual and societal ageing, and its cross-nationally ex-ante harmonized design that permits international comparisons of health, economic and social outcomes in Europe and the USA.",
"title": ""
},
{
"docid": "87199b3e7def1db3159dc6b5989638aa",
"text": "We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"title": ""
},
{
"docid": "1dfbe95e53aeae347c2b42ef297a859f",
"text": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural networkbased (NN-based) methods develop, NNbased KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the crossattention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "cae4703a50910c7718284c6f8230a4bc",
"text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.",
"title": ""
},
{
"docid": "4ec6229ae75b13bbcc429f07eda0fb4a",
"text": "Face detection is a well-explored problem. Many challenges on face detectors like extreme pose, illumination, low resolution and small scales are studied in the previous work. However, previous proposed models are mostly trained and tested on good-quality images which are not always the case for practical applications like surveillance systems. In this paper, we first review the current state-of-the-art face detectors and their performance on benchmark dataset FDDB, and compare the design protocols of the algorithms. Secondly, we investigate their performance degradation while testing on low-quality images with different levels of blur, noise, and contrast. Our results demonstrate that both hand-crafted and deep-learning based face detectors are not robust enough for low-quality images. It inspires researchers to produce more robust design for face detection in the wild.",
"title": ""
},
{
"docid": "64f4c53592f185020bece88d4adf3ea4",
"text": "Due to the well-known limitations of diffusion tensor imaging, high angular resolution diffusion imaging (HARDI) is used to characterize non-Gaussian diffusion processes. One approach to analyzing HARDI data is to model the apparent diffusion coefficient (ADC) with higher order diffusion tensors. The diffusivity function is positive semidefinite. In the literature, some methods have been proposed to preserve positive semidefiniteness of second order and fourth order diffusion tensors. None of them can work for arbitrarily high order diffusion tensors. In this paper, we propose a comprehensive model to approximate the ADC profile by a positive semidefinite diffusion tensor of either second or higher order. We call this the positive semidefinite diffusion tensor (PSDT) model. PSDT is a convex optimization problem with a convex quadratic objective function constrained by the nonnegativity requirement on the smallest Z-eigenvalue of the diffusivity function. The smallest Z-eigenvalue is a computable measure of the extent of positive definiteness of the diffusivity function. We also propose some other invariants for the ADC profile analysis. Experiment results show that higher order tensors could improve the estimation of anisotropic diffusion and that the PSDT model can depict the characterization of diffusion anisotropy which is consistent with known neuroanatomy.",
"title": ""
},
{
"docid": "9efa07624d538272a5da844c74b2f56d",
"text": "Electronic health records (EHRs), digitization of patients’ health record, offer many advantages over traditional ways of keeping patients’ records, such as easing data management and facilitating quick access and real-time treatment. EHRs are a rich source of information for research (e.g. in data analytics), but there is a risk that the published data (or its leakage) can compromise patient privacy. The k-anonymity model is a widely used privacy model to study privacy breaches, but this model only studies privacy against identity disclosure. Other extensions to mitigate existing limitations in k-anonymity model include p-sensitive k-anonymity model, p+-sensitive k-anonymity model, and (p, α)-sensitive k-anonymity model. In this paper, we point out that these existing models are inadequate in preserving the privacy of end users. Specifically, we identify situations where p+sensitive k-anonymity model is unable to preserve the privacy of individuals when an adversary can identify similarities among the categories of sensitive values. We term such attack as Categorical Similarity Attack (CSA). Thus, we propose a balanced p+-sensitive k-anonymity model, as an extension of the p+-sensitive k-anonymity model. We then formally analyze the proposed model using High-Level Petri Nets (HLPN) and verify its properties using SMT-lib and Z3 solver.We then evaluate the utility of release data using standard metrics and show that our model outperforms its counterparts in terms of privacy vs. utility tradeoff. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
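As a concrete illustration of the anonymity notions discussed in the passage above, the sketch below checks plain k-anonymity and p-sensitivity (at least p distinct sensitive values per equivalence class) on a toy generalized table. It is a simplified illustration with hypothetical attributes and does not implement the balanced p+-sensitive model the paper proposes.

```python
from collections import defaultdict

def equivalence_classes(rows, quasi_identifiers):
    """Group records by their quasi-identifier values."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[q] for q in quasi_identifiers)
        classes[key].append(row)
    return classes

def check_k_and_p(rows, quasi_identifiers, sensitive, k, p):
    """Return True if every equivalence class has >= k records and
    >= p distinct sensitive values (plain k-anonymity plus p-sensitivity).
    Toy check only, not the balanced p+-sensitive model from the paper."""
    for group in equivalence_classes(rows, quasi_identifiers).values():
        if len(group) < k:
            return False
        if len({row[sensitive] for row in group}) < p:
            return False
    return True

if __name__ == "__main__":
    # hypothetical generalized EHR release: age range and zip prefix are
    # quasi-identifiers, diagnosis is the sensitive attribute
    table = [
        {"age": "20-30", "zip": "130**", "diagnosis": "flu"},
        {"age": "20-30", "zip": "130**", "diagnosis": "asthma"},
        {"age": "30-40", "zip": "148**", "diagnosis": "flu"},
        {"age": "30-40", "zip": "148**", "diagnosis": "flu"},
    ]
    # False: the second class has only one distinct sensitive value
    print(check_k_and_p(table, ["age", "zip"], "diagnosis", k=2, p=2))
```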
{
"docid": "d2ed4a8558c9ec9f794abd3cc22678e3",
"text": "Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce dynamic data selection for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call gradual fine-tuning, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline.",
"title": ""
},
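The selection criterion referenced in the passage above (Axelrod et al., 2011) ranks sentences by the cross-entropy difference between an in-domain and a general-domain language model. The sketch below assumes two hypothetical per-sentence cross-entropy functions standing in for those models and shows how a shrinking top-ranked subset could be re-selected each epoch, loosely mimicking the gradual fine-tuning idea rather than reproducing the paper's exact schedule.

```python
def cross_entropy_difference(sentence, h_in_domain, h_general):
    """Axelrod-style score: lower means more in-domain-like.
    h_in_domain / h_general are hypothetical per-sentence cross-entropy functions."""
    return h_in_domain(sentence) - h_general(sentence)

def select_subset(corpus, h_in_domain, h_general, fraction):
    """Keep the best-scoring fraction of the corpus."""
    ranked = sorted(corpus, key=lambda s: cross_entropy_difference(s, h_in_domain, h_general))
    return ranked[: max(1, int(len(ranked) * fraction))]

def gradual_fine_tuning_schedule(corpus, h_in_domain, h_general,
                                 start_fraction=1.0, decay=0.5, epochs=4):
    """Yield a progressively smaller, more in-domain training subset per epoch
    (a rough sketch of dynamic data selection, not the paper's exact method)."""
    fraction = start_fraction
    for _ in range(epochs):
        yield select_subset(corpus, h_in_domain, h_general, fraction)
        fraction *= decay

if __name__ == "__main__":
    corpus = ["medical report text", "chat message lol", "patient dosage info", "sports news"]
    # toy stand-ins for language-model cross-entropies
    h_in = lambda s: 0.0 if any(w in s for w in ("medical", "patient", "dosage")) else 5.0
    h_gen = lambda s: 2.0
    for epoch, subset in enumerate(gradual_fine_tuning_schedule(corpus, h_in, h_gen)):
        print(epoch, subset)
```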
{
"docid": "55eb5594f05319c157d71361880f1983",
"text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.",
"title": ""
}
] |
scidocsrr
|
7cc67a094f9fe7d473f40fd910fa7b81
|
FAST PENUMBRA CALCULATION IN RAY TRACING
|
[
{
"docid": "d2a0ff28b7163203a03be27977b9b425",
"text": "The various types of shadows are characterized. Most existing shadow algorithms are described, and their complexities, advantages, and shortcomings are discussed. Hard shadows, soft shadows, shadows of transparent objects, and shadows for complex modeling primitives are considered. For each type, shadow algorithms within various rendering techniques are examined. The aim is to provide readers with enough background and insight on the various methods to allow them to choose the algorithm best suited to their needs and to help identify the areas that need more research and point to possible solutions.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "033f498664f001ebd7cf894df1c8b2c6",
"text": "Recently, acoustic models based on deep neural notworks (DNNs) have been introduced and showed dramatic improvements over acoustic models based on GMM in a variety of tasks. In this paper, we considered the improvement of noise robustness of DNN. Inspired by Missing Feature Theory and static noise aware training, we proposed an approach that uses a noise-suppressed acoustic feature and estimated noise information as input of DNN. We used simple Spectral Subtraction as noise-suppression. As noise estimation, we used estimation per utterance or frame. In noisy speech recognition experiments, we compared the proposed method with other methods and the proposed method showed the superior performance than the other approaches. For noise estimation per utterance with log Mel Filterbank, we obtained 28.6% word error rate reduction compared with multi condition training, 5.9% reduction compared with noise adaptive training.",
"title": ""
},
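To make the noise-suppression step in the passage above concrete, here is a minimal magnitude spectral-subtraction sketch on framed FFT spectra. The over-subtraction factor, spectral floor and framing are illustrative assumptions; the actual front-end used in the paper may differ.

```python
import numpy as np

def spectral_subtraction(frames, noise_frames, alpha=2.0, beta=0.01):
    """Magnitude spectral subtraction on already-framed time-domain signals.

    frames:       (n_frames, frame_len) array of noisy speech frames
    noise_frames: (m, frame_len) array of noise-only frames (e.g. leading silence)
    alpha, beta:  over-subtraction factor and spectral floor (illustrative values)
    Returns enhanced magnitude spectra and the noisy phase; the magnitudes could be
    fed into a filterbank front-end, or re-synthesized using the noisy phase.
    """
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)  # average noise spectrum
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    enhanced = np.maximum(mag - alpha * noise_mag, beta * mag)  # floor avoids negative magnitudes
    return enhanced, phase

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 4000)).reshape(10, 400)
    noise = 0.3 * rng.standard_normal((10, 400))
    enhanced_mag, _ = spectral_subtraction(clean + noise, noise[:3])
    print(enhanced_mag.shape)  # (10, 201)
```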
{
"docid": "91cb2ee27517441704bf739ee811d6c6",
"text": "The primo vascular system has a specific anatomical and immunohistochemical signature that sets it apart from the arteriovenous and lymphatic systems. With immune and endocrine functions, the primo vascular system has been found to play a large role in biological processes, including tissue regeneration, inflammation, and cancer metastases. Although scientifically confirmed in 2002, the original discovery was made in the early 1960s by Bong-Han Kim, a North Korean scientist. It would take nearly 40 years after that discovery for scientists to revisit Kim's research to confirm the early findings. The presence of primo vessels in and around blood and lymph vessels, nerves, viscera, and fascia, as well as in the brain and spinal cord, reveals a common link that could potentially open novel possibilities of integration with cranial, lymphatic, visceral, and fascial approaches in manual medicine.",
"title": ""
},
{
"docid": "ab1c7ede012bd20f30bab66fcaec49fa",
"text": "Visual-inertial navigation systems (VINS) have prevailed in various applications, in part because of the complementary sensing capabilities and decreasing costs as well as sizes. While many of the current VINS algorithms undergo inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and obervability constraints in computing EKF Jacobians so that the resulting linearized system can best approximate the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most-accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte-Carlo simulation and real-world experimental tests.",
"title": ""
},
{
"docid": "9237b82f1d127ab59a1a5e8f9fa7f86c",
"text": "Purpose: Enterprise social media platforms provide new ways of sharing knowledge and communicating within organizations to benefit from the social capital and valuable knowledge that employees have. Drawing on social dilemma and self‐determination theory, the aim of the study is to understand what factors drive employees’ participation and what factors hamper their participation in enterprise social media. Methodology: Based on a literature review, a unified research model is derived integrating demographic, individual, organizational and technological factors that influence the motivation of employees to share knowledge. The model is tested using statistical methods on a sample of 114 respondents in Denmark. Qualitative data is used to elaborate and explain quantitative results‘ findings. Practical implications: The proposed knowledge sharing framework helps to understand what factors impact engagement on social media. Furthermore the article suggests different types of interventions to overcome the social dilemma of knowledge sharing. Findings: Our findings pinpoint towards the general drivers and barriers to knowledge sharing within organizations. The significant drivers are: enjoy helping others, monetary rewards, management support, change of knowledge sharing behavior and recognition. The significant identified barriers to knowledge sharing are: change of behavior, lack of trust and lack of time. Originality: The study contributes to an understanding of factors leading to the success or failure of enterprise social media drawing on self‐determination and social dilemma theory.",
"title": ""
},
{
"docid": "6ed698be633b69022cea6c845774c564",
"text": "Grounds for thinking that the model described in the previous paper can be used to support general biological principles of social evolution are briefly discussed. Two principles are presented, the first concerning the evolution of social behaviour in general and the second the evolution of social discrimination. Some tentative evidence is given. More general application of the theory in biology is then discussed, particular attention being given to cases where the indicated interpretation differs from previous views and to cases which appear anomalous. A hypothesis is outlined concerning social evolution in the Hymenoptera; but the evidence that at present exists is found somewhat contrary on certain points. Other subjects considered include warning behaviour, the evolution of distasteful properties in insects, clones of cells and clones of zooids as contrasted with other types of colonies, the confinement of parental care to true offspring in birds and insects, fights, the behaviour of parasitoid insect larvae within a host, parental care in connection with monogyny and monandry and multi-ovulate ovaries in plants in connection with wind and insect pollination.",
"title": ""
},
{
"docid": "19359356fe18c5ca4028696c145001dd",
"text": "Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply/linear add, Kulisch accumulation and tapered encodings from Gustafson’s posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9% top-1 and 0.2% top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8/38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96× the power and 1.12× the area of 8/32-bit integer multiply-add. In 16 bits, our log float multiply-add is 0.59× the power and 0.68× the area of IEEE 754 float16 fused multiply-add, maintaining the same signficand precision and dynamic range, proving useful for training ASICs as well.",
"title": ""
},
{
"docid": "97c5b202cdc1f7d8220bf83663a0668f",
"text": "Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations in free-viewing of natural scenes. Majority of models are based on low-level visual features and the importance of top-down factors has not yet been fully explored or modeled. Here, we combine low-level features such as orientation, color, intensity, saliency maps of previous best bottom-up models with top-down cognitive visual features (e.g., faces, humans, cars, etc.) and learn a direct mapping from those features to eye fixations using Regression, SVM, and AdaBoost classifiers. By extensive experimenting over three benchmark eye-tracking datasets using three popular evaluation scores, we show that our boosting model outperforms 27 state-of-the-art models and is so far the closest model to the accuracy of human model for fixation prediction. Furthermore, our model successfully detects the most salient object in a scene without sophisticated image processings such as region segmentation.",
"title": ""
},
{
"docid": "e27da58188be54b71187d3489fa6b4e7",
"text": "In a prospective-longitudinal study of a representative birth cohort, we tested why stressful experiences lead to depression in some people but not in others. A functional polymorphism in the promoter region of the serotonin transporter (5-HT T) gene was found to moderate the influence of stressful life events on depression. Individuals with one or two copies of the short allele of the 5-HT T promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than individuals homozygous for the long allele. This epidemiological study thus provides evidence of a gene-by-environment interaction, in which an individual's response to environmental insults is moderated by his or her genetic makeup.",
"title": ""
},
{
"docid": "0a3feaa346f4fd6bfc0bbda6ba92efc6",
"text": "We present Magic Finger, a small device worn on the fingertip, which supports always-available input. Magic Finger inverts the typical relationship between the finger and an interactive surface: with Magic Finger, we instrument the user's finger itself, rather than the surface it is touching. Magic Finger senses touch through an optical mouse sensor, enabling any surface to act as a touch screen. Magic Finger also senses texture through a micro RGB camera, allowing contextual actions to be carried out based on the particular surface being touched. A technical evaluation shows that Magic Finger can accurately sense 22 textures with an accuracy of 98.9%. We explore the interaction design space enabled by Magic Finger, and implement a number of novel interaction techniques that leverage its unique capabilities.",
"title": ""
},
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
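A minimal version of the paired bootstrap test described in the passage above is sketched below. It assumes a user-supplied corpus-level metric function (for example a BLEU implementation); the toy overlap metric and sample sizes are illustrative stand-ins, not the paper's exact setup.

```python
import random

def paired_bootstrap(refs, hyps_a, hyps_b, metric, n_resamples=1000, seed=0):
    """Estimate how often system A beats system B on resampled test sets.

    refs, hyps_a, hyps_b: parallel lists of references / system outputs
    metric: callable(refs_subset, hyps_subset) -> corpus-level score
            (assumed to exist, e.g. a BLEU implementation)
    Returns the fraction of bootstrap samples on which A scores higher than B,
    which can be read as an approximate confidence that A is better.
    """
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]   # sample sentences with replacement
        r = [refs[i] for i in idx]
        if metric(r, [hyps_a[i] for i in idx]) > metric(r, [hyps_b[i] for i in idx]):
            wins_a += 1
    return wins_a / n_resamples

if __name__ == "__main__":
    # toy metric: average unigram overlap, standing in for BLEU
    def overlap(refs, hyps):
        return sum(len(set(r.split()) & set(h.split())) for r, h in zip(refs, hyps)) / len(refs)

    refs  = ["the cat sat", "a dog ran", "birds fly high"] * 100
    sys_a = ["the cat sat", "a dog ran fast", "birds fly"] * 100
    sys_b = ["cat", "dog", "birds"] * 100
    print(paired_bootstrap(refs, sys_a, sys_b, overlap))
```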
{
"docid": "2f0eadac10692e6de896539d558e9f82",
"text": "Communication disorders occur across the lifespan and encompass a wide range of conditions that interfere with individuals' abilities to hear (e.g., hearing loss), speak (e.g., voice disorders; motor speech disorders), and/or use language (e.g., specific language impairment; aphasia) to meet their communication needs. Such disorders often compromise the social, recreational, emotional, educational, and vocational aspects of an individual's life. This research examines the development and implementation of new software that facilitates multi-syllabic speech production in children with autism and speech delays. The VocSyl software package utilizes a suite of audio visualizations that represent a myriad of audio features in abstract representations. The goal of these visualizations is to provide children with language impairments a new persistent modality in which to experience and practice speech-language skills.",
"title": ""
},
{
"docid": "aa6a22096c633072b1e362f20e18a4e4",
"text": "In this paper, we propose a new deep framework which predicts facial attributes and leverage it as a soft modality to improve face identification performance. Our model is an end to end framework which consists of a convolutional neural network (CNN) whose output is fanned out into two separate branches; the first branch predicts facial attributes while the second branch identifies face images. Contrary to the existing multi-task methods which only use a shared CNN feature space to train these two tasks jointly, we fuse the predicted attributes with the features from the face modality in order to improve the face identification performance. Experimental results show that our model brings benefits to both face identification as well as facial attribute prediction performance, especially in the case of identity facial attributes such as gender prediction. We tested our model on two standard datasets annotated by identities and face attributes. Experimental results indicate that the proposed model outperforms most of the current existing face identification and attribute prediction methods.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
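The linear shift-invariant graph filters mentioned in the passage above can be written as polynomials of a graph shift operator S (for example an adjacency or Laplacian matrix). The small sketch below applies such a filter to a graph signal; the graph, signal and coefficients are arbitrary illustrations, not the architectures evaluated in the paper.

```python
import numpy as np

def graph_filter(S, x, taps):
    """Apply a linear shift-invariant graph filter y = sum_k h_k S^k x.

    S:    (N, N) graph shift operator (adjacency or Laplacian)
    x:    (N,) graph signal
    taps: list of filter coefficients [h_0, h_1, ..., h_K]
    """
    y = np.zeros_like(x, dtype=float)
    shifted = x.astype(float)
    for h in taps:
        y += h * shifted          # accumulate h_k * S^k x
        shifted = S @ shifted     # next diffusion step, S^(k+1) x
    return y

if __name__ == "__main__":
    # 4-node ring graph, one-hot input signal
    S = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    x = np.array([1.0, 0.0, 0.0, 0.0])
    print(graph_filter(S, x, taps=[0.5, 0.3, 0.2]))
```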
{
"docid": "da5e40683054b89d619712c31a3384e5",
"text": "The Los Angeles Smart Grid Project aims to use informatics techniques to bring about a quantum leap in the way demand response load optimization is performed in utilities. Semantic information integration, from sources as diverse as Internet-connected smart meters and social networks, is a linchpin to support the advanced analytics and mining algorithms required for this. In association with it, semantic complex event processing system will allow consumer and utility managers to easily specify and enact energy policies continuously. We present the information systems architecture for the project that is under development, and discuss research issues that emerge from having to design a system that supports 1.4 million customers and a rich ecosystem of Smart Grid applications from users, third party vendors, the utility and regulators.",
"title": ""
},
{
"docid": "369e5fb60d3afc993821159b64bc3560",
"text": "For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system population, storage capacity and consumption, and degree of file modification. We present a generative model that explains the namespace structure and the distribution of directory sizes. We find significant temporal trends relating to the popularity of certain file types, the origin of file content, the way the namespace is used, and the degree of variation among file systems, as well as more pedestrian changes in size and capacities. We give examples of consequent lessons for designers of file systems and related software.",
"title": ""
},
{
"docid": "2c7d17ca22881cdd3e462706fd34d168",
"text": "Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships. In this work, we study how type-constraints can generally support the statistical modeling with latent variable models. More precisely, we integrated prior knowledge in form of type-constraints in various state of the art latent variable approaches. Our experimental results show that prior knowledge on relation-types significantly improves these models up to 77% in linkprediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, typeconstraints are neither always available nor always complete e.g., they can become fuzzy when entities lack proper typing. We also show that in these cases, it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation-types based on observations",
"title": ""
},
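One simple way type-constraints enter latent variable knowledge-graph models is through negative sampling: corrupted triples are drawn only from entities compatible with the relation's domain or range. The sketch below illustrates that idea with hypothetical entity and type sets; it is not the exact integration used in the paper, and the fallback pool is only a rough stand-in for a local closed-world assumption.

```python
import random

def type_constrained_negatives(triple, range_entities, all_entities, n,
                               use_constraint=True, seed=0):
    """Corrupt the tail of (head, relation, tail) to build negative examples.

    range_entities: entities allowed as tails for this relation (the type-constraint)
    all_entities:   fallback pool, used when constraints are unavailable or fuzzy
    """
    rng = random.Random(seed)
    head, relation, tail = triple
    pool = [e for e in (range_entities if use_constraint else all_entities) if e != tail]
    return [(head, relation, rng.choice(pool)) for _ in range(n)]

if __name__ == "__main__":
    all_entities = ["Berlin", "Paris", "Einstein", "Curie", "Germany", "France"]
    countries = ["Germany", "France"]      # hypothetical range type for 'capital_of' tails
    triple = ("Berlin", "capital_of", "Germany")
    print(type_constrained_negatives(triple, countries, all_entities, n=3))
    print(type_constrained_negatives(triple, countries, all_entities, n=3, use_constraint=False))
```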
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "5c2f115e0159d15a87904e52879c1abf",
"text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.",
"title": ""
},
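For reference, the noise-free form of the preintegrated IMU terms between keyframes i and j (the quantities the passage above turns into single relative-motion constraints) can be written as follows. This is the standard on-manifold preintegration notation with bias-update and noise terms omitted, so it is a sketch rather than the paper's full model.

```latex
% Preintegrated rotation, velocity and position increments between keyframes i and j,
% built from raw gyroscope/accelerometer samples \tilde{\omega}_k, \tilde{a}_k and the
% (fixed) biases b^g_i, b^a_i; noise terms are dropped for brevity.
\begin{aligned}
\Delta \mathrm{R}_{ij} &= \prod_{k=i}^{j-1} \operatorname{Exp}\!\big( (\tilde{\boldsymbol{\omega}}_k - \mathbf{b}^{g}_{i})\,\Delta t \big), \\
\Delta \mathbf{v}_{ij} &= \sum_{k=i}^{j-1} \Delta \mathrm{R}_{ik}\,(\tilde{\mathbf{a}}_k - \mathbf{b}^{a}_{i})\,\Delta t, \\
\Delta \mathbf{p}_{ij} &= \sum_{k=i}^{j-1} \Big[ \Delta \mathbf{v}_{ik}\,\Delta t + \tfrac{1}{2}\,\Delta \mathrm{R}_{ik}\,(\tilde{\mathbf{a}}_k - \mathbf{b}^{a}_{i})\,\Delta t^{2} \Big].
\end{aligned}
```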
{
"docid": "7d0fb12fce0ef052684a8664a3f5c543",
"text": "In this paper, we consider a finite-horizon Markov decision process (MDP) for which the objective at each stage is to minimize a quantile-based risk measure (QBRM) of the sequence of future costs; we call the overall objective a dynamic quantile-based risk measure (DQBRM). In particular, we consider optimizing dynamic risk measures where the one-step risk measures are QBRMs, a class of risk measures that includes the popular value at risk (VaR) and the conditional value at risk (CVaR). Although there is considerable theoretical development of risk-averse MDPs in the literature, the computational challenges have not been explored as thoroughly. We propose datadriven and simulation-based approximate dynamic programming (ADP) algorithms to solve the risk-averse sequential decision problem. We address the issue of inefficient sampling for risk applications in simulated settings and present a procedure, based on importance sampling, to direct samples toward the “risky region” as the ADP algorithm progresses. Finally, we show numerical results of our algorithms in the context of an application involving risk-averse bidding for energy storage.",
"title": ""
}
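Since the passage above builds on quantile-based risk measures, a small empirical sketch of VaR and CVaR over sampled costs may help. The convention used here (costs as losses, higher is worse, CVaR as the mean of the upper tail) is an assumption and differs across references, and the heavy-tailed cost samples are purely illustrative.

```python
import numpy as np

def value_at_risk(costs, alpha=0.95):
    """Empirical VaR: the alpha-quantile of the cost (loss) distribution."""
    return np.quantile(np.asarray(costs, dtype=float), alpha)

def conditional_value_at_risk(costs, alpha=0.95):
    """Empirical CVaR: mean of the costs at or above the VaR level."""
    costs = np.asarray(costs, dtype=float)
    var = value_at_risk(costs, alpha)
    return costs[costs >= var].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sampled_costs = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # heavy-tailed stage costs
    print("VaR_0.95 :", value_at_risk(sampled_costs))
    print("CVaR_0.95:", conditional_value_at_risk(sampled_costs))
```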
] |
scidocsrr
|
1e88f3ffa1b3ff4a9bb30324fd5d228d
|
Features of promising technologies for pretreatment of lignocellulosic biomass.
|
[
{
"docid": "a1396e55c7dcb0e84703d52ba2149fd9",
"text": "■ Abstract Ethanol made from lignocellulosic biomass sources, such as agricultural and forestry residues and herbaceous and woody crops, provides unique environmental, economic, and strategic benefits. Through sustained research funding, primarily by the U.S. Department of Energy, the estimated cost of biomass ethanol production has dropped from∼$4.63/gallon in 1980 to∼$1.22/gallon today, and it is now potentially competitive for blending with gasoline. Advances in pretreatment by acid-catalyzed hemicellulose hydrolysis and enzymes for cellulose breakdown coupled with recent development of genetically engineered bacteria that ferment all five sugars in biomass to ethanol at high yields have been the key to reducing costs. However, through continued advances in accessing the cellulose and hemicellulose fractions, the cost of biomass ethanol can be reduced to the point at which it is competitive as a pure fuel without subsidies. A major challenge to realizing the great benefits of biomass ethanol remains to substantially reduce the risk of commercializing first-ofa-kind technology, and greater emphasis on developing a fundamental understanding of the technology for biomass conversion to ethanol would reduce application costs and accelerate commercialization. Teaming of experts to cooperatively research key processing steps would be a particularly powerful and effective approach to meeting these needs.",
"title": ""
}
] |
[
{
"docid": "64acb2d16c23f2f26140c0bce1785c9b",
"text": "Physical forces of gravity, hemodynamic stresses, and movement play a critical role in tissue development. Yet, little is known about how cells convert these mechanical signals into a chemical response. This review attempts to place the potential molecular mediators of mechanotransduction (e.g. stretch-sensitive ion channels, signaling molecules, cytoskeleton, integrins) within the context of the structural complexity of living cells. The model presented relies on recent experimental findings, which suggests that cells use tensegrity architecture for their organization. Tensegrity predicts that cells are hard-wired to respond immediately to mechanical stresses transmitted over cell surface receptors that physically couple the cytoskeleton to extracellular matrix (e.g. integrins) or to other cells (cadherins, selectins, CAMs). Many signal transducing molecules that are activated by cell binding to growth factors and extracellular matrix associate with cytoskeletal scaffolds within focal adhesion complexes. Mechanical signals, therefore, may be integrated with other environmental signals and transduced into a biochemical response through force-dependent changes in scaffold geometry or molecular mechanics. Tensegrity also provides a mechanism to focus mechanical energy on molecular transducers and to orchestrate and tune the cellular response.",
"title": ""
},
{
"docid": "9ca90172c5beff5922b4f5274ef61480",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.",
"title": ""
},
{
"docid": "a0251ae10bfabd188766aa2453b8cebb",
"text": "This paper presents the development of automatic vehicle plate detection system using image processing technique. The famous name for this system is Automatic Number Plate Recognition (ANPR). Automatic vehicle plate detection system is commonly used in field of safety and security systems especially in car parking area. Beside the safety aspect, this system is applied to monitor road traffic such as the speed of vehicle and identification of the vehicle's owner. This system is designed to assist the authorities in identifying the stolen vehicle not only for car but motorcycle as well. In this system, the Optical Character Recognition (OCR) technique was the prominent technique employed by researchers to analyse image of vehicle plate. The limitation of this technique was the incapability of the technique to convert text or data accurately. Besides, the characters, the background and the size of the vehicle plate are varied from one country to other country. Hence, this project proposes a combination of image processing technique and OCR to obtain the accurate vehicle plate recognition for vehicle in Malaysia. The outcome of this study is the system capable to detect characters and numbers of vehicle plate in different backgrounds (black and white) accurately. This study also involves the development of Graphical User Interface (GUI) to ease user in recognizing the characters and numbers in the vehicle or license plates.",
"title": ""
},
{
"docid": "d8828a6cafcd918cd55b1782629b80e0",
"text": "For deep-neural-network (DNN) processors [1-4], the product-sum (PS) operation predominates the computational workload for both convolution (CNVL) and fully-connect (FCNL) neural-network (NN) layers. This hinders the adoption of DNN processors to on the edge artificial-intelligence (AI) devices, which require low-power, low-cost and fast inference. Binary DNNs [5-6] are used to reduce computation and hardware costs for AI edge devices; however, a memory bottleneck still remains. In Fig. 31.5.1 conventional PE arrays exploit parallelized computation, but suffer from inefficient single-row SRAM access to weights and intermediate data. Computing-in-memory (CIM) improves efficiency by enabling parallel computing, reducing memory accesses, and suppressing intermediate data. Nonetheless, three critical challenges remain (Fig. 31.5.2), particularly for FCNL. We overcome these problems by co-optimizing the circuits and the system. Recently, researches have been focusing on XNOR based binary-DNN structures [6]. Although they achieve a slightly higher accuracy, than other binary structures, they require a significant hardware cost (i.e. 8T-12T SRAM) to implement a CIM system. To further reduce the hardware cost, by using 6T SRAM to implement a CIM system, we employ binary DNN with 0/1-neuron and ±1-weight that was proposed in [7]. We implemented a 65nm 4Kb algorithm-dependent CIM-SRAM unit-macro and in-house binary DNN structure (focusing on FCNL with a simplified PE array), for cost-aware DNN AI edge processors. This resulted in the first binary-based CIM-SRAM macro with the fastest (2.3ns) PS operation, and the highest energy-efficiency (55.8TOPS/W) among reported CIM macros [3-4].",
"title": ""
},
{
"docid": "8d9246e7780770b5f7de9ef0adbab3e6",
"text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.",
"title": ""
},
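The observer described in the passage above adapts its gain by re-estimating the measurement noise at every step. As background, a generic discrete-time Kalman predict/update cycle is sketched below, with the measurement-noise matrix R passed in per step so that an adaptive scheme could change it over time; the adaptation rule itself is not shown, and the toy position/speed model is not the SAKO state model (which also observes load torque).

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a discrete-time Kalman observer.

    x, P : prior state estimate and covariance
    z    : current measurement (e.g. an encoder position reading)
    A, C : state-transition and measurement matrices
    Q, R : process and measurement noise covariances; R may be re-estimated
           at every step by an adaptive scheme (not shown here).
    """
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update
    S = C @ P_pred @ C.T + R                      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    dt = 1e-3
    A = np.array([[1.0, dt], [0.0, 1.0]])         # toy position/speed model
    C = np.array([[1.0, 0.0]])                    # only position is measured
    Q = 1e-6 * np.eye(2)
    R = np.array([[1e-4]])
    x, P = np.zeros(2), np.eye(2)
    for k in range(5):
        z = np.array([0.01 * k])                  # fake quantized encoder readings
        x, P = kalman_step(x, P, z, A, C, Q, R)
    print("estimated position/speed:", x)
```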
{
"docid": "6cd301f1b6ffe64f95b7d63eb0356a87",
"text": "The purpose of this study is to analyze factors affecting on online shopping behavior of consumers that might be one of the most important issues of e-commerce and marketing field. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. One of the objectives of this study is covering the shortcomings of previous studies that didn't examine main factors that influence on online shopping behavior. This goal has been followed by using a model examining the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping behavior and subjective norms, perceived behavioral control, domain specific innovativeness and attitude on online shopping behavior as the hypotheses of study. To investigate these hypotheses 200 questionnaires dispersed among online stores of Iran. Respondents to the questionnaire were consumers of online stores in Iran which randomly selected. Finally regression analysis was used on data in order to test hypothesizes of study. This study can be considered as an applied research from purpose perspective and descriptive-survey with regard to the nature and method (type of correlation). The study identified that financial risks and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain specific innovativeness and subjective norms positively affect online shopping behavior. Furthermore, attitude toward online shopping positively affected online shopping behavior of consumers.",
"title": ""
},
{
"docid": "a667360d5214a47efee3326536a95527",
"text": "In this paper we propose a method for automatic color extraction and indexing to support color queries of image and video databases. This approach identifies the regions within images that contain colors from predetermined color sets. By searching over a large number of color sets, a color index for the database is created in a fashion similar to that for file inversion. This allows very fast indexing of the image collection by color contents of the images. Furthermore, information about the identified regions, such as the color set, size, and location, enables a rich variety of queries that specify both color content and spatial relationships of regions. We present the single color extraction and indexing method and contrast it to other color approaches. We examine single and multiple color extraction and image query on a database of 3000 color images.",
"title": ""
},
{
"docid": "4ef797ee3961528ec3bed66b2ddac452",
"text": "WiFi offloading is envisioned as a promising solution to the mobile data explosion problem in cellular networks. WiFi offloading for moving vehicles, however, poses unique characteristics and challenges, due to high mobility, fluctuating mobile channels, etc. In this paper, we focus on the problem of WiFi offloading in vehicular communication environments. Specifically, we discuss the challenges and identify the research issues related to drive-thru Internet access and effectiveness of vehicular WiFi offloading. Moreover, we review the state-of-the-art offloading solutions, in which advanced vehicular communications can be employed. We also shed some lights on the path for future research on this topic.",
"title": ""
},
{
"docid": "b1d85112f8a14e1ec28a6a64a03e7ec0",
"text": "This article reports the results of a survey of Chief Information Officers (CIOs) from Fortune 1000 companies on their perceptions of the critical success factors in Enterprise Resource Planning (ERP) implementation. Through a review of the literature, 11 critical success factors were identified , with underlying subfactors, for successful ERP implementation. The degree of criticality of each of these factors were assessed in a survey administered to the CIOs. The 5 most critical factors identified by the CIOs were top management support, project champion, ERP teamwork and composition, project management, and change management program and culture. The importance of each of these factors is discussed.",
"title": ""
},
{
"docid": "0c90537f2b470354c2328c567e053ee2",
"text": "BACKGROUND\nCombination antiplatelet therapy with clopidogrel and aspirin may reduce the rate of recurrent stroke during the first 3 months after a minor ischemic stroke or transient ischemic attack (TIA). A trial of combination antiplatelet therapy in a Chinese population has shown a reduction in the risk of recurrent stroke. We tested this combination in an international population.\n\n\nMETHODS\nIn a randomized trial, we assigned patients with minor ischemic stroke or high-risk TIA to receive either clopidogrel at a loading dose of 600 mg on day 1, followed by 75 mg per day, plus aspirin (at a dose of 50 to 325 mg per day) or the same range of doses of aspirin alone. The dose of aspirin in each group was selected by the site investigator. The primary efficacy outcome in a time-to-event analysis was the risk of a composite of major ischemic events, which was defined as ischemic stroke, myocardial infarction, or death from an ischemic vascular event, at 90 days.\n\n\nRESULTS\nA total of 4881 patients were enrolled at 269 international sites. The trial was halted after 84% of the anticipated number of patients had been enrolled because the data and safety monitoring board had determined that the combination of clopidogrel and aspirin was associated with both a lower risk of major ischemic events and a higher risk of major hemorrhage than aspirin alone at 90 days. Major ischemic events occurred in 121 of 2432 patients (5.0%) receiving clopidogrel plus aspirin and in 160 of 2449 patients (6.5%) receiving aspirin plus placebo (hazard ratio, 0.75; 95% confidence interval [CI], 0.59 to 0.95; P=0.02), with most events occurring during the first week after the initial event. Major hemorrhage occurred in 23 patients (0.9%) receiving clopidogrel plus aspirin and in 10 patients (0.4%) receiving aspirin plus placebo (hazard ratio, 2.32; 95% CI, 1.10 to 4.87; P=0.02).\n\n\nCONCLUSIONS\nIn patients with minor ischemic stroke or high-risk TIA, those who received a combination of clopidogrel and aspirin had a lower risk of major ischemic events but a higher risk of major hemorrhage at 90 days than those who received aspirin alone. (Funded by the National Institute of Neurological Disorders and Stroke; POINT ClinicalTrials.gov number, NCT00991029 .).",
"title": ""
},
{
"docid": "9fd82750a7d46911670ba8842a7978c2",
"text": "Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent’s competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.",
"title": ""
},
{
"docid": "c3bf7e7556dba69d4e3ff40e6b40be17",
"text": "A frequency-domain parametric study using generalized consistent transmitting boundaries has been performed to evaluate the significance of topographic effects on the seismic response of steep slopes. The results show that the peak amplification of motion at the crest of a slope occurs at a normalized frequency 1t/2 = 0.2, where H is the slope height and 2 is the wavelength of the motion. The importance of the natural site frequency is illustrated by the analysis of a stepped layer over a half-space. It was found that the natural frequency of the region behind the crest can dominate the response, relative to the topographic effect, for the conditions studied. Moreover, the effect of topography can be handled separately from the amplification due to the natural frequency of the deposit behind the crest of the slope. This concept of separating the amplification caused by topography from that caused by the natural frequency is advantageous to the development of a simplified method to estimate topographic effects.",
"title": ""
},
{
"docid": "41bb0bf78e7c9ced5cdbc10f8afa22e3",
"text": "Mobile apps continue to consume increasing amounts of sensitive data, such as banking credentials and classified documents. At the same time, the number of smartphone thefts is increasing at a rapid speed. As a result, there is an imperative need to protect sensitive data on lost or stolen mobile devices. In this work, we develop a practical solution to protect sensitive data on mobile devices. Our solution enables adaptive protection by pro-actively stepping up or stepping down data security based on perceived contextual risk of the device. We realize our solution for the Android platform in the form of a system called AppShell. AppShell does not require root privilege, nor need any modification to the underlying framework, and hence is a ready-to-deploy solution. It supports both in-memory and on-disk data protection by transparently encrypting the data, and discarding the encryption key, when required, for enhanced protection. We implement a working prototype of AppShell and evaluate it against several popular Android apps. Our results show that AppShell can successfully protect sensitive data in the lost devices with a reasonable performance overhead.",
"title": ""
},
{
"docid": "d96c9204c552181e4d00ed961b18c665",
"text": "We present a new tool, named DART, for automatically testing software that combines three main techniques: (1) automated extraction of the interface of a program with its external environment using static source-code parsing; (2) automatic generation of a test driver for this interface that performs random testing to simulate the most general environment the program can operate in; and (3) dynamic analysis of how the program behaves under random testing and automatic generation of new test inputs to direct systematically the execution along alternative program paths. Together, these three techniques constitute Directed Automated Random Testing, or DART for short. The main strength of DART is thus that testing can be performed completely automatically on any program that compiles -- there is no need to write any test driver or harness code. During testing, DART detects standard errors such as program crashes, assertion violations, and non-termination. Preliminary experiments to unit test several examples of C programs are very encouraging.",
"title": ""
},
{
"docid": "10124ea154b8704c3a6aaec7543ded57",
"text": "Tomato bacterial wilt and canker, caused by Clavibacter michiganensis subsp. michiganensis (Cmm) is considered one of the most important bacterial diseases of tomato worldwide. During the last two decades, severe outbreaks have occurred in greenhouses in the horticultural belt of Buenos Aires-La Plata, Argentina. Cmm strains collected in this area over a period of 14 years (2000–2013) were characterized for genetic diversity by rep-PCR genomic fingerprinting and level of virulence in order to have a better understanding of the source of inoculum and virulence variability. Analyses of BOX-, ERIC- and REP-PCR fingerprints revealed that the strains were genetically diverse; the same three fingerprint types were obtained in all three cases. No relationship could be established between rep-PCR clustering and the year, location or greenhouse origin of isolates, which suggests different sources of inoculum. However, in a few cases, bacteria with identical fingerprint types were isolated from the same greenhouse in different years. Despite strains differing in virulence, particularly within BOX-PCR groups, putative virulence genes located in plasmids (celA, pat-1) or in a pathogenicity island in the chromosome (tomA, chpC, chpG and ppaA) were detected in all strains. Our results suggest that new strains introduced every year via seed importation might be coexisting with others persisting locally. This study highlights the importance of preventive measures to manage tomato bacterial wilt and canker.",
"title": ""
},
{
"docid": "06bfa716dd067d05229c92dc66757772",
"text": "Although many critics are reluctant to accept the trustworthiness of qualitative research, frameworks for ensuring rigour in this form of work have been in existence for many years. Guba’s constructs, in particular, have won considerable favour and form the focus of this paper. Here researchers seek to satisfy four criteria. In addressing credibility, investigators attempt to demonstrate that a true picture of the phenomenon under scrutiny is being presented. To allow transferability, they provide sufficient detail of the context of the fieldwork for a reader to be able to decide whether the prevailing environment is similar to another situation with which he or she is familiar and whether the findings can justifiably be applied to the other setting. The meeting of the dependability criterion is difficult in qualitative work, although researchers should at least strive to enable a future investigator to repeat the study. Finally, to achieve confirmability, researchers must take steps to demonstrate that findings emerge from the data and not their own predispositions. The paper concludes by suggesting that it is the responsibility of research methods teachers to ensure that this or a comparable model for ensuring trustworthiness is followed by students undertaking a qualitative inquiry.",
"title": ""
},
{
"docid": "9b7ff8a7dec29de5334f3de8d1a70cc3",
"text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.",
"title": ""
},
{
"docid": "a26089c56be9fc140acc47086964ad5a",
"text": "Module integrated converters (MICs) have been under rapid development for single-phase grid-tied photovoltaic applications. The capacitive energy storage implementation for the double-line-frequency power variation represents a differentiating factor among existing designs. This paper introduces a new topology that places the energy storage block in a series-connected path with the line interface block. This design provides independent control over the capacitor voltage, soft-switching for all semiconductor devices, and the full four-quadrant operation with the grid. The proposed approach is analyzed and experimentally demonstrated.",
"title": ""
},
{
"docid": "4636e3ade7c3bdc73ca29f9e74ec870c",
"text": "For many organizations, Information Technology (IT) enabled business initiatives and IT infrastructure constitute major investments that, if not managed properly, may impair rather than enhance the organization's competitive position. Especially since the advent of Sarbanes–Oxley (SOX), both management and IT professionals are concerned with design, implementation, and assessment of IT governance strategies to ensure that technology truly serves the needs of the business. Via an in-depth study within one organisation, this research explores the factors influencing IT governance structures, processes, and outcome metrics. Interview responses to open-ended questions indicated that more effective IT governance performance outcomes are associated with a shared understanding of business and IT objectives; active involvement of IT steering committees; a balance of business and IT representatives in IT decisions; and comprehensive and well-communicated IT strategies and policies. IT governance also plays a prominent role in fostering project success and delivering business value. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "55eb8b24baa00c38534ef0020c682fff",
"text": "NoSQL databases are designed to manage large volumes of data. Although they do not require a default schema associated with the data, they are categorized by data models. Because of this, data organization in NoSQL databases needs significant design decisions because they affect quality requirements such as scalability, consistency and performance. In traditional database design, on the logical modeling phase, a conceptual schema is transformed into a schema with lower abstraction and suitable to the target database data model. In this context, the contribution of this paper is an approach for logical design of NoSQL document databases. Our approach consists in a process that converts a conceptual modeling into efficient logical representations for a NoSQL document database. Workload information is considered to determine an optimized logical schema, providing a better access performance for the application. We evaluate our approach through a case study in the e-commerce domain and demonstrate that the NoSQL logical structure generated by our approach reduces the amount of items accessed by the application queries.",
"title": ""
}
] |
scidocsrr
|
ef57a24af8e661a4b218c82b45f9e753
|
Logarithmic Time One-Against-Some
|
[
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
}
] |
[
{
"docid": "49e8c5d0aac226bbd5c81d467e632c4f",
"text": "After decades of study, automatic face detection and recognition systems are now accurate and widespread. Naturally, this means users who wish to avoid automatic recognition are becoming less able to do so. Where do we stand in this cat-and-mouse race? We currently live in a society where everyone carries a camera in their pocket. Many people willfully upload most or all of the pictures they take to social networks which invest heavily in automatic face recognition systems. In this setting, is it still possible for privacy-conscientious users to avoid automatic face detection and recognition? If so, how? Must evasion techniques be obvious to be effective, or are there still simple measures that users can use to protect themselves? In this work, we find ways to evade face detection on Facebook, a representative example of a popular social network that uses automatic face detection to enhance their service. We challenge widely-held beliefs about evading face detection: do our old techniques such as blurring the face region or wearing \"privacy glasses\" still work? We show that in general, state-of-the-art detectors can often find faces even if the subject wears occluding clothing or even if the uploader damages the photo to prevent faces from being detected.",
"title": ""
},
{
"docid": "36fd3787b4e6a0c4c308cd4201ab0345",
"text": "Program analysis is fundamental for program optimizations, debugging, and many other tasks. But developing program analyses has been a challenging and error-prone process for general users. Declarative program analysis has shown the promise to dramatically improve the productivity in the development of program analyses. Current declarative program analysis is however subject to some major limitations in supporting cooperations among analysis tools, guiding program optimizations, and often requires much effort for repeated program preprocessing. In this work, we advocate the integration of ontology into declarative program analysis. As a way to standardize the definitions of concepts in a domain and the representation of the knowledge in the domain, ontology offers a promising way to address the limitations of current declarative program analysis. We develop a prototype framework named PATO for conducting program analysis upon ontology-based program representation. Experiments on six program analyses confirm the potential of ontology for complementing existing declarative program analysis. It supports multiple analyses without separate program preprocessing, promotes cooperative Liveness analysis between two compilers, and effectively guides some program optimizations.",
"title": ""
},
{
"docid": "4faef20f6f8807f500b0a555f0f0ed2b",
"text": "Online search and item recommendation systems are often based on being able to correctly label items with topical keywords. Typically, topical labelers analyze the main text associated with the item, but social media posts are often multimedia in nature and contain contents beyond the main text. Topic labeling for social media posts is therefore an important open problem for supporting effective social media search and recommendation. In this work, we present a novel solution to this problem for Google+ posts, in which we integrated a number of different entity extractors and annotators, each responsible for a part of the post (e.g. text body, embedded picture, video, or web link). To account for the varying quality of different annotator outputs, we first utilized crowdsourcing to measure the accuracy of individual entity annotators, and then used supervised machine learning to combine different entity annotators based on their relative accuracy. Evaluating using a ground truth data set, we found that our approach substantially outperforms topic labels obtained from the main text, as well as naive combinations of the individual annotators. By accurately applying topic labels according to their relevance to social media posts, the results enables better search and item recommendation.",
"title": ""
},
{
"docid": "194c1a9a16ee6dad00c41544fca74371",
"text": "Computers are not (yet?) capable of being reasonable any more than is a Second Lieutenant. Against stupidity, the Gods themselves contend in vain. Banking systems include the back-end bookkeeping systems that record customers' account details and transaction processing systems such as cash machine networks and high-value interbank money transfer systems that feed them with data. They are important for a number of reasons. First, bookkeeping was for many years the main business of the computer industry, and banking was its most intensive area of application. Personal applications such as Netscape and Powerpoint might now run on more machines, but accounting is still the critical application for the average business. So the protection of bookkeeping systems is of great practical importance. It also gives us a well-understood model of protection in which confidentiality plays almost no role, but where the integrity of records (and their immutability once made) is of paramount importance. Second, transaction processing systems—whether for small debits such as $50 cash machine withdrawals or multimillion-dollar wire transfers—were the applications that launched commercial cryptography. Banking applications drove the development not just of encryption algorithms and protocols, but also of the supporting technologies, such as tamper-resistant cryptographic processors. These processors provide an important and interesting example of a trusted computing base that is quite different from",
"title": ""
},
{
"docid": "bf445955186e2f69f4ef182850090ffc",
"text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "1db42d9d65737129fa08a6ad4d52d27e",
"text": "This study introduces a unique prototype system for structural health monitoring (SHM), SmartSync, which uses the building’s existing Internet backbone as a system of virtual instrumentation cables to permit modular and largely plug-and-play deployments. Within this framework, data streams from distributed heterogeneous sensors are pushed through network interfaces in real time and seamlessly synchronized and aggregated by a centralized server, which performs basic data acquisition, event triggering, and database management while also providing an interface for data visualization and analysis that can be securely accessed. The system enables a scalable approach to monitoring tall and complex structures that can readily interface a variety of sensors and data formats (analog and digital) and can even accommodate variable sampling rates. This study overviews the SmartSync system, its installation/operation in theworld’s tallest building, Burj Khalifa, and proof-of-concept in triggering under dual excitations (wind and earthquake).DOI: 10.1061/(ASCE)ST.1943-541X.0000560. © 2013 American Society of Civil Engineers. CE Database subject headings: High-rise buildings; Structural health monitoring; Wind loads; Earthquakes. Author keywords: Tall buildings; Structural health monitoring; System identification.",
"title": ""
},
{
"docid": "8ca6e0b5c413cc228af0d64ce8cf9d3b",
"text": "On January 8, a Database Column reader asked for our views on new distributed database research efforts, and we'll begin here with our views on MapReduce. This is a good time to discuss it, since the recent trade press has been filled with news of the revolution of so-called \"cloud computing.\" This paradigm entails harnessing large numbers of (low-end) processors working in parallel to solve a computing problem. In effect, this suggests constructing a data center by lining up a large number of \"jelly beans\" rather than utilizing a much smaller number of high-end servers.",
"title": ""
},
{
"docid": "57f5b00d796489b7f5caee701ce3116b",
"text": "SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver’s throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs. Drawing on our recent experience in developing an SR-IOV capable networking solution for the Xen hypervisor we discuss the system level requirements and techniques for SR-IOV enablement on the platform. We discuss PCI configuration considerations, direct MMIO, interrupt handling and DMA into an HVM using an IOMMU (I/O Memory Management Unit). We then explain the architectural, design and implementation considerations for SR-IOV networking in Xen in which the Physical Function has a driver running in the driver domain that serves as a “master” and each Virtual Function exposed to a guest VM has its own virtual driver.",
"title": ""
},
{
"docid": "c508f62dfd94d3205c71334638790c54",
"text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).",
"title": ""
},
{
"docid": "5c30ecda39e41e2b32659e12c9585ba6",
"text": "We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.",
"title": ""
},
{
"docid": "2096fff6f862603ea86f51185697066f",
"text": "OBJECTIVE\nThis report provides an overview of marital and cohabiting relationships in the United States among men and women aged 15-44 in 2002, by a variety of characteristics. National estimates are provided that highlight formal and informal marital status, previous experience with marriage and cohabitation, the sequencing of marriage and cohabitation, and the stability of cohabitations and marriages.\n\n\nMETHODS\nThe analyses presented in this report are based on a nationally representative sample of 12,571 men and women aged 15-44 living in households in the United States in 2002, based on the National Survey of Family Growth, Cycle 6.\n\n\nRESULTS\nOver 40% of men and women aged 15-44 were currently married at the date of interview, compared with about 9% who were currently cohabiting. Men and women were, however, likely to cohabit prior to becoming married. Marriages were longer lasting than cohabiting unions; about 78% of marriages lasted 5 years or more, compared with less than 30% of cohabitations. Cohabitations were shorter-lived than marriages in part because about half of cohabitations transitioned to marriage within 3 years. Variations--often large variations-in marital and cohabiting relationships and durations were found by race and Hispanic origin, education, family background, and other factors.",
"title": ""
},
{
"docid": "5f344817b225363f5309208909619306",
"text": "Semantic specialization is a process of finetuning pre-trained distributional word vectors using external lexical knowledge (e.g., WordNet) to accentuate a particular semantic relation in the specialized vector space. While post-processing specialization methods are applicable to arbitrary distributional vectors, they are limited to updating only the vectors of words occurring in external lexicons (i.e., seen words), leaving the vectors of all other words unchanged. We propose a novel approach to specializing the full distributional vocabulary. Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space. We exploit words seen in the resources as training examples for learning a global specialization function. This function is learned by combining a standard L2-distance loss with a adversarial loss: the adversarial component produces more realistic output vectors. We show the effectiveness and robustness of the proposed method across three languages and on three tasks: word similarity, dialog state tracking, and lexical simplification. We report consistent improvements over distributional word vectors and vectors specialized by other state-of-the-art specialization frameworks. Finally, we also propose a cross-lingual transfer method for zero-shot specialization which successfully specializes a full target distributional space without any lexical knowledge in the target language and without any bilingual data.",
"title": ""
},
{
"docid": "d489bd0fbf14fdad30b5a59190c86078",
"text": "This research investigates two competing hypotheses from the literature: 1) the Social Enhancement (‘‘Rich Get Richer’’) hypothesis that those more popular offline augment their popularity by increasing it on Facebook , and 2) the ‘‘Social Compensation’’ (‘‘Poor Get Richer’’) hypothesis that users attempt to increase their Facebook popularity to compensate for inadequate offline popularity. Participants (n= 614) at a large, urban university in the Midwestern United States completed an online survey. Results are that a subset of users, those more extroverted and with higher self-esteem, support the Social Enhancement hypothesis, being more popular both offline and on Facebook . Another subset of users, those less popular offline, support the Social Compensation hypotheses because they are more introverted, have lower self-esteem and strive more to look popular on Facebook . Semantic network analysis of open-ended responses reveals that these two user subsets also have different meanings for offline and online popularity. Furthermore, regression explains nearly twice the variance in offline popularity as in Facebook popularity, indicating the latter is not as socially grounded or defined as offline popularity.",
"title": ""
},
{
"docid": "81ec51ca319ab957c0e951c9de31859c",
"text": "Photography has been striving to capture an ever increasing amount of visual information in a single image. Digital sensors, however, are limited to recording a small subset of the desired information at each pixel. A common approach to overcoming the limitations of sensing hardware is the optical multiplexing of high-dimensional data into a photograph. While this is a well-studied topic for imaging with color filter arrays, we develop a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function. This framework unifies a wide variety of existing approaches to analyze and reconstruct multiplexed data in either the spatial or the frequency domain. We demonstrate many practical applications of our framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.",
"title": ""
},
{
"docid": "5a9b5313575208b0bdf8ffdbd4e271f5",
"text": "A new method for the design of predictive controllers for SISO systems is presented. The proposed technique allows uncertainties and constraints to be concluded in the design of the control law. The goal is to design, at each sample instant, a predictive feedback control law that minimizes a performance measure and guarantees of constraints are satisfied for a set of models that describes the system to be controlled. The predictive controller consists of a finite horizon parametric-optimization problem with an additional constraint over the manipulated variable behavior. This is an end-constraint based approach that ensures the exponential stability of the closed-loop system. The inclusion of this additional constraint, in the on-line optimization algorithm, enables robust stability properties to be demonstrated for the closedloop system. This is the case even though constraints and disturbances are present. Finally, simulation results are presented using a nonlinear continuous stirred tank reactor model.",
"title": ""
},
{
"docid": "b11592d07491ef9e0f67e257bfba6d84",
"text": "Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we provide a detailed analysis of the previously proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters decreases. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.",
"title": ""
},
{
"docid": "1962428380a7ccb6e64d0c7669736e9d",
"text": "This target article presents an integrated evolutionary model of the development of attachment and human reproductive strategies. It is argued that sex differences in attachment emerge in middle childhood, have adaptive significance in both children and adults, and are part of sex-specific life history strategies. Early psychosocial stress and insecure attachment act as cues of environmental risk, and tend to switch development towards reproductive strategies favoring current reproduction and higher mating effort. However, due to sex differences in life history trade-offs between mating and parenting, insecure males tend to adopt avoidant strategies, whereas insecure females tend to adopt anxious/ambivalent strategies, which maximize investment from kin and mates. Females are expected to shift to avoidant patterns when environmental risk is more severe. Avoidant and ambivalent attachment patterns also have different adaptive values for boys and girls, in the context of same-sex competition in the peer group: in particular, the competitive and aggressive traits related to avoidant attachment can be favored as a status-seeking strategy for males. Finally, adrenarche is proposed as the endocrine mechanism underlying the reorganization of attachment in middle childhood, and the implications for the relationship between attachment and sexual development are explored. Sex differences in the development of attachment can be fruitfully integrated within the broader framework of adaptive plasticity in life history strategies, thus contributing to a coherent evolutionary theory of human development.",
"title": ""
},
{
"docid": "510439267c11c53b31dcf0b1c40e331b",
"text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.",
"title": ""
},
{
"docid": "b50efa7b82d929c1b8767e23e8359a06",
"text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather that the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Instrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records taht are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.",
"title": ""
},
{
"docid": "16823fdc74b69fe2157be948168e3584",
"text": "In this contribution, various sales forecast models for the German automobile market are developed and tested. Our most important criteria for the assessment of these models are the quality of the prediction as well as an easy explicability. Yearly, quarterly and monthly data for newly registered automobiles from 1992 to 2007 serve as the basis for the tests of these models. The time series model used consists of additive components: trend, seasonal, calendar and error component. The three latter components are estimated univariately while the trend component is estimated multivariately by Multiple Linear Regression as well as by a Support Vector Machine. Possible influences which are considered include macro-economic and market-specific factors. These influences are analysed by a feature selection. We found the non-linear model to be superior. Furthermore, the quarterly data provided the most accurate results.",
"title": ""
}
] |
scidocsrr
|
621f8f5ac4cfa61d973a3b2e66b3c1b1
|
A comparative study of fairness-enhancing interventions in machine learning
|
[
{
"docid": "8e5cbfe1056a75b1116c93d780c00847",
"text": "We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.",
"title": ""
}
] |
[
{
"docid": "9873c52bd1e8c073ca8c74fbcec6970f",
"text": "In situations where a seller has surplus stock and another seller is stocked out, it may be desirable to transfer surplus stock from the former to the latter. We examine how the possibility of such transshipments between two independent locations affects the optimal inventory orders at each location. If each location aims to maximize its own profits—we call this local decision making—their inventory choices will not, in general, maximize joint profits. We find transshipment prices which induce the locations to choose inventory levels consistent with joint-profit maximization. (Transshipments; Newsvendor Model; Nash Equilibrium)",
"title": ""
},
{
"docid": "9afdbe22f1c4ffe5bf10c2ab6438ef61",
"text": "Click-through rate is a quantity of interest to online advertisers, search engine optimizers, and sponsored search providers alike. The rate at which users click on advertisements presented to them serves as both a metric for evaluating advertising effectiveness and a financial tool for cost and revenue projection. To predict future click-through rate, investigators typically use historical click information since it provides tangible examples of user behavior. In some cases plentiful historical data are available and this method provides a reliable estimate. More often, however, insufficient historical data exists and creative aggregation must fill the gap. In this work, we hypothesize that different terms have an inherently different likelihood of receiving a sponsored click. For example, the search terms “digital camera” and “brain structure” clearly express more and less shoppingoriented intent, respectively. We seek to estimate a termlevel click-through rate (CTR) reflecting these inherent differences. At times even aggregation to the term level leaves us with insufficient historical data for a confident estimate, so we also propose the use of clusters of related terms for less frequent, or even completely novel, terms. We reviewed other broad estimates of CTR (rank CTR and query-volume decile CTR) and compared them to the estimates computed using hierarchical clusters of related terms. We found that using historical data aggregated by cluster leads to more accurate estimates on average of term-level CTR for terms with little or no historical data.",
"title": ""
},
{
"docid": "ee8ac41750c7d1545af54e812d7f2d9c",
"text": "The demand for more sophisticated Location-Based Services (LBS) in terms of applications variety and accuracy is tripling every year since the emergence of the smartphone a few years ago. Equally, smartphone manufacturers are mounting several wireless communication and localization technologies, inertial sensors as well as powerful processing capability, to cater to such LBS applications. A hybrid of wireless technologies is needed to provide seamless localization solutions and to improve accuracy, to reduce time to fix, and to reduce power consumption. The review of localization techniques/technologies of this emerging field is therefore important. This article reviews the recent research-oriented and commercial localization solutions on smartphones. The focus of this article is on the implementation challenges associated with utilizing these positioning solutions on Android-based smartphones. Furthermore, the taxonomy of smartphone-location techniques is highlighted with a special focus on the detail of each technique and its hybridization. The article compares the indoor localization techniques based on accuracy, utilized wireless technology, overhead, and localization technique used. The pursuit of achieving ubiquitous localization outdoors and indoors for critical LBS applications such as security and safety shall dominate future research efforts.",
"title": ""
},
{
"docid": "3b8e716e658176cebfbdb313c8cb22ac",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "f435edc49d4907e8132f436cc43338db",
"text": "OBJECTIVE\nDepression is common among patients with diabetes, but its relationship to glycemic control has not been systematically reviewed. Our objective was to determine whether depression is associated with poor glycemic control.\n\n\nRESEARCH DESIGN AND METHODS\nMedline and PsycINFO databases and published reference lists were used to identify studies that measured the association of depression with glycemic control. Meta-analytic procedures were used to convert the findings to a common metric, calculate effect sizes (ESs), and statistically analyze the collective data.\n\n\nRESULTS\nA total of 24 studies satisfied the inclusion and exclusion criteria for the meta-analysis. Depression was significantly associated with hyperglycemia (Z = 5.4, P < 0.0001). The standardized ES was in the small-to-moderate range (0.17) and was consistent, as the 95% CI was narrow (0.13-0.21). The ES was similar in studies of either type 1 or type 2 diabetes (ES 0.19 vs. 0.16) and larger when standardized interviews and diagnostic criteria rather than self-report questionnaires were used to assess depression (ES 0.28 vs. 0.15).\n\n\nCONCLUSIONS\nDepression is associated with hyperglycemia in patients with type 1 or type 2 diabetes. Additional studies are needed to establish the directional nature of this relationship and to determine the effects of depression treatment on glycemic control and the long-term course of diabetes.",
"title": ""
},
{
"docid": "af4106bc4051e01146101aeb58a4261f",
"text": "In recent years a great amount of research has focused on algorithms that learn features from unlabeled data. In this work we propose a model based on the Self-Organizing Map (SOM) neural network to learn features useful for the problem of automatic natural images classification. In particular we use the SOM model to learn single-layer features from the extremely challenging CIFAR-10 dataset, containing 60.000 tiny labeled natural images, and subsequently use these features with a pyramidal histogram encoding to train a linear SVM classifier. Despite the large number of images, the proposed feature learning method requires only few minutes on an entry-level system, however we show that a supervised classifier trained with learned features provides significantly better results than using raw pixels values or other handcrafted features designed specifically for image classification. Moreover, exploiting the topological property of the SOM neural network, it is possible to reduce the number of features and speed up the supervised training process combining topologically close neurons, without repeating the feature learning process.",
"title": ""
},
{
"docid": "4b78f107ee628cefaeb80296e4f9ae27",
"text": "On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection.\n In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.\n Our design is implemented in a few hundred lines of X10. On the binomial tree described in olivier:08}, the program achieve 87% efficiency on an Infiniband cluster of 1024 Power7 cores, with a peak throughput of 2.37 GNodes/sec. It achieves 87% efficiency on a Blue Gene/P with 2048 processors, and a peak throughput of 0.966 GNodes/s. All numbers are relative to single core sequential performance. This implementation has been refactored into a reusable global load balancing framework. Applications can use this framework to obtain global load balance with minimal code changes.\n In summary, we claim: (a) the first formulation of UTS that does not involve application level global termination detection, (b) the introduction of lifeline graphs to reduce failed steals (c) the demonstration of simple lifeline graphs based on k-hypercubes, (d) performance with superior efficiency (or the same efficiency but over a wider range) than published results on UTS. In particular, our framework can deliver the same or better performance as an unrestricted random work-stealing implementation, while reducing the number of attempted steals.",
"title": ""
},
{
"docid": "96bdef1ad2f90a3f6591339acd569ce5",
"text": "Is there a difference between believing and merely understanding an idea?Descartes thought so. He considered the acceptance and rejection of an idea to be alternative outcomes of an effortful assessment process that occurs subsequent to the automatic comprehension of that idea. This article examined Spinoza's alternative suggestion that (a) the acceptance of an idea is part of the automatic comprehension of that idea and (b) the rejection of an idea occurs subsequent to, and more effortfully than, its acceptance. In this view, the mental representation of abstract ideas is quite similar to the mental representation of physical objects: People believe in the ideas they comprehend, as quickly and automatically as they believe in the objects they see. Research in social and cognitive psychology suggests that Spinoza's model may be a more accurate account of human belief than is that of Descartes.",
"title": ""
},
{
"docid": "1eb4805e6874ea1882a995d0f1861b80",
"text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.",
"title": ""
},
{
"docid": "e5c6debcbbb979a18ca13f7739043174",
"text": "Recurrent neural networks and sequence to sequence models require a predetermined length for prediction output length. Our model addresses this by allowing the network to predict a variable length output in inference. A new loss function with a tailored gradient computation is developed that trades off prediction accuracy and output length. The model utilizes a function to determine whether a particular output at a time should be evaluated or not given a predetermined threshold. We evaluate the model on the problem of predicting the prices of securities. We find that the model makes longer predictions for more stable securities and it naturally balances prediction accuracy and length.",
"title": ""
},
{
"docid": "e2308b435dddebc422ff49a7534bbf83",
"text": "Memory encryption has yet to be used at the core of operating system designs to provide confidentiality of code and data. As a result, numerous vulnerabilities exist at every level of the software stack. Three general approaches have evolved to rectify this problem. The most popular approach is based on complex hardware enhancements; this allows all encryption and decryption to be conducted within a well-defined trusted boundary. Unfortunately, these designs have not been integrated within commodity processors and have primarily been explored through simulation with very few prototypes. An alternative approach has been to augment existing hardware with operating system enhancements for manipulating keys, providing improved trust. This approach has provided insights into the use of encryption but has involved unacceptable overheads and has not been adopted in commercial operating systems. Finally, specialized industrial devices have evolved, potentially adding coprocessors, to increase security of particular operations in specific operating environments. However, this approach lacks generality and has introduced unexpected vulnerabilities of its own. Recently, memory encryption primitives have been integrated within commodity processors such as the Intel i7, AMD bulldozer, and multiple ARM variants. This opens the door for new operating system designs that provide confidentiality across the entire software stack outside the CPU. To date, little practical experimentation has been conducted, and the improvements in security and associated performance degradation has yet to be quantified. This article surveys the current memory encryption literature from the viewpoint of these central issues.",
"title": ""
},
{
"docid": "0860b29f52d403a0ff728a3e356ec071",
"text": "Neuroanatomy has entered a new era, culminating in the search for the connectome, otherwise known as the brain's wiring diagram. While this approach has led to landmark discoveries in neuroscience, potential neurosurgical applications and collaborations have been lagging. In this article, the authors describe the ideas and concepts behind the connectome and its analysis with graph theory. Following this they then describe how to form a connectome using resting state functional MRI data as an example. Next they highlight selected insights into healthy brain function that have been derived from connectome analysis and illustrate how studies into normal development, cognitive function, and the effects of synthetic lesioning can be relevant to neurosurgery. Finally, they provide a précis of early applications of the connectome and related techniques to traumatic brain injury, functional neurosurgery, and neurooncology.",
"title": ""
},
{
"docid": "0576c4553dbfc2bbbe0e0d88afb890b3",
"text": "This review covers the toxicology of mercury and its compounds. Special attention is paid to those forms of mercury of current public health concern. Human exposure to the vapor of metallic mercury dates back to antiquity but continues today in occupational settings and from dental amalgam. Health risks from methylmercury in edible tissues of fish have been the subject of several large epidemiological investigations and continue to be the subject of intense debate. Ethylmercury in the form of a preservative, thimerosal, added to certain vaccines, is the most recent form of mercury that has become a public health concern. The review leads to general discussion of evolutionary aspects of mercury, protective and toxic mechanisms, and ends on a note that mercury is still an \"element of mystery.\"",
"title": ""
},
{
"docid": "dd4322e25b26b501cf60f9b42a7aa575",
"text": "a r t i c l e i n f o In organizations today, the risk of poor information quality is becoming increasingly high as larger and more complex information resources are being collected and managed. To mitigate this risk, decision makers assess the quality of the information provided by their IS systems in order to make effective decisions based on it. To do so, they may rely on quality metadata: objective quality measurements tagged by data managers onto the information used by decision makers. Decision makers may also gauge information quality on their own, subjectively and contextually assessing the usefulness of the information for solving the specific task at hand. Although information quality has been defined as fitness for use, models of information quality assessment have thus far tended to ignore the impact of contextual quality on information use and decision outcomes. Contextual assessments can be as important as objective quality indicators because they can affect which information gets used for decision making tasks. This research offers a theoretical model for understanding users' contextual information quality assessment processes. The model is grounded in dual-process theories of human cognition, which enable simultaneous evaluation of both objective and contextual information quality attributes. Findings of an exploratory laboratory experiment suggest that the theoretical model provides an avenue for understanding contextual aspects of information quality assessment in concert with objective ones. The model offers guidance for the design of information environments that can improve performance by integrating both objective and subjective aspect of users' quality assessments. Organizational data is a critical resource that supports business processes and managerial decision making. Advances in information technology have enabled organizations to collect and store more data than ever before. This data is processed in a variety of different and complex ways to generate information that serves as input to organizational decision tasks. As data volumes increase, so does the complexity of managing it and the risks of poor data quality. Poor quality data can be detrimental to system usability and hinder operational performance, leading to flawed decisions [27]. It can also damage organizational reputation, heighten risk exposure, and cause significant capital losses [28]. While international figures are difficult to determine, data quality problems currently cost U.S. businesses over $600 billion annually [1]. Data quality is hence an important area of concern to both practitioners and researchers. Data quality researchers have used the terms \" data quality …",
"title": ""
},
{
"docid": "d9cb2d92cb2a2fd4170d3b51f4be7d65",
"text": "Precision Agriculture (PA) is the management of spatial and temporal variability of the fields. This management concept incorporates a range of management as well as ICT tools to assess and treat the variability within the field. The adoption and diffusion of PA is discussed in this chapter. PA has been practiced over the last 15 years mostly in North America and Northern Europe. Despite its promises, PA has not yet managed to be adopted widely by farmers. This chapter’s results are based on the findings of six mail surveys, focus groups and personal interviews with PA practitioners in the UK, Denmark and the USA over six years (1998-2003). The information related to ICT adoption in PA is presented, including software and hardware aspects, data ownership, data handling, data interpretation, internet and e-mail use, as well as information preferences to invest and practice in PA. It is a common belief that the use of PA technologies has been tremendously improved over the years. However, findings from these studies showed that lack of agronomic and technical skills are key problems for adopting PA practices and there is an urgent need for holistic decision support systems. Moreover, compatibility between hardware and software, as well as user friendliness and particularly time consumption are for many users a serious impediment for PA adoption. An overview of Precision Agriculture Precision agriculture applications and trends The recent advances in information and telecommunication technologies have allowed farmers to acquire vast amounts of site-specific data for their fields, with the ultimate aim being to reduce uncertainty in decision-making (National Research Council, 1997; Blackmore, 2000). Precision Agriculture (PA) or site-specific crop management can be defined as the management of spatial and temporal variability at a sub-field level to improve economic returns and reduce environmental impact (Blackmore et al., 2003). Within the concept of PA the main activities are data collection and processing and variable rate applications of inputs. The tools available consist of a wide range of techniques and technologies from information and communication technology as well as sensor and application technologies, farm management and economics.",
"title": ""
},
{
"docid": "1518c78b41e5b93a995144604921e857",
"text": "This study models the electrical contact resistance (ECR) between two surfaces separated by an anisotropic conductive film. The film is made up of an epoxy with conductive spherical particles(metallic) dispersed within. In practical situations the particles are often heavily loaded and will undergo severe plastic deformation and may essentially be flattened out. In between the particles and the surfaces there may also be an ultra-thin insulating film (consisting of epoxy) which causes considerable electrical resistance between the surfaces. In the past this effect has been neglected and the predicted ECR was much lower than that measured experimentally. This added resistance is considered using electron tunneling theory. The severe plastic deformation of the spherical particles is modeled using a new expanded elasto-plastic spherical contact model. This work also investigates the effect of compression of the separating epoxy film on the electrical contact resistance. The model finds that the high experimental ECR measurements can be accounted for by including the existence of a thin insulating film through the electron tunneling model",
"title": ""
},
{
"docid": "b291e4282b7697eec7dfd4b6b3d09ffb",
"text": "BACKGROUND\nThe optimal duration of dual-antiplatelet therapy and the risk-benefit ratio for long-term dual-antiplatelet therapy after coronary stenting remain poorly defined. We evaluated the impact of up to 6 versus 24 months of dual-antiplatelet therapy in a broad all-comers patient population receiving a balanced proportion of Food and Drug Administration-approved drug-eluting or bare-metal stents.\n\n\nMETHODS AND RESULTS\nWe randomly assigned 2013 patients to receive bare-metal, zotarolimus-eluting, paclitaxel-eluting, or everolimus-eluting stent implantation. At 30 days, patients in each stent group were randomly allocated to receive up to 6 or 24 months of clopidogrel therapy in addition to aspirin. The primary end point was a composite of death of any cause, myocardial infarction, or cerebrovascular accident. The cumulative risk of the primary outcome at 2 years was 10.1% with 24-month dual-antiplatelet therapy compared with 10.0% with 6-month dual-antiplatelet therapy (hazard ratio, 0.98; 95% confidence interval, 0.74-1.29; P=0.91). The individual risks of death, myocardial infarction, cerebrovascular accident, or stent thrombosis did not differ between the study groups; however, there was a consistently greater risk of hemorrhage in the 24-month clopidogrel group according to all prespecified bleeding definitions, including the recently proposed Bleeding Academic Research Consortium classification.\n\n\nCONCLUSIONS\nA regimen of 24 months of clopidogrel therapy in patients who had received a balanced mixture of drug-eluting or bare-metal stents was not significantly more effective than a 6-month clopidogrel regimen in reducing the composite of death due to any cause, myocardial infarction, or cerebrovascular accident.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: http://www.clinicaltrials.gov. Unique identifier: NCT00611286.",
"title": ""
},
{
"docid": "b162b2efcd66a9a254e3b8473a5d62f6",
"text": "The rhetoric of both the Brexit and Trump campaigns was grounded in conceptions of the past as the basis for political claims in the present. Both established the past as constituted by nations that were represented as 'white' into which racialized others had insinuated themselves and gained disproportionate advantage. Hence, the resonant claim that was broadcast primarily to white audiences in each place 'to take our country back'. The politics of both campaigns was also echoed in those social scientific analyses that sought to focus on the 'legitimate' claims of the 'left behind' or those who had come to see themselves as 'strangers in their own land'. The skewing of white majority political action as the action of a more narrowly defined white working class served to legitimize analyses that might otherwise have been regarded as racist. In effect, I argue that a pervasive 'methodological whiteness' has distorted social scientific accounts of both Brexit and Trump's election victory and that this needs to be taken account of in our discussion of both phenomena.",
"title": ""
},
{
"docid": "36251aafaca23765da789f914ed299fb",
"text": "Fine-grained image categorization is a challenging task aiming at distinguishing objects belonging to the same basic-level category, e.g., leaf or mushroom. It is a useful technique that can be applied for species recognition, face verification, and so on. Most of the existing methods either have difficulties to detect discriminative object components automatically, or suffer from the limited amount of training data in each sub-category. To solve these problems, this paper proposes a new fine-grained image categorization model. The key is a dense graph mining algorithm that hierarchically localizes discriminative object parts in each image. More specifically, to mimic the human hierarchical perception mechanism, a superpixel pyramid is generated for each image. Thereby, graphlets from each layer are constructed to seamlessly capture object components. Intuitively, graphlets representative to each super-/sub-category is densely distributed in their feature space. Thus, a dense graph mining algorithm is developed to discover graphlets representative to each super-/sub-category. Finally, the discovered graphlets from pairwise images are integrated into an image kernel for fine-grained recognition. Theoretically, the learned kernel can generalize several state-of-the-art image kernels. Experiments on nine image sets demonstrate the advantage of our method. Moreover, the discovered graphlets from each sub-category accurately capture those tiny discriminative object components, e.g., bird claws, heads, and bodies.",
"title": ""
}
] |
scidocsrr
|
57764c67196cebde8e4caf99dca4a24e
|
Meticillin-resistant Staphylococcus pseudintermedius: clinical challenge and treatment options.
|
[
{
"docid": "816bd541fd0f5cc509ad69cfed5d3e6e",
"text": "It has been shown that people and pets can harbour identical strains of meticillin-resistant (MR) staphylococci when they share an environment. Veterinary dermatology practitioners are a professional group with a high incidence of exposure to animals infected by Staphylococcus spp. The objective of this study was to assess the prevalence of carriage of MR Staphylococcus aureus (MRSA), MR S. pseudintermedius (MRSP) and MR S. schleiferi (MRSS) by veterinary dermatology practice staff and their personal pets. A swab technique and selective media were used to screen 171 veterinary dermatology practice staff and their respective pets (258 dogs and 160 cats). Samples were shipped by over-night carrier. Human subjects completed a 22-question survey of demographic and epidemiologic data relevant to staphylococcal transmission. The 171 human-source samples yielded six MRSA (3.5%), nine MRSP (5.3%) and four MRSS (2.3%) isolates, while 418 animal-source samples yielded eight MRSA (1.9%) 21 MRSP (5%), and two MRSS (0.5%) isolates. Concordant strains (genetically identical by pulsed-field gel electrophoresis) were isolated from human subjects and their respective pets in four of 171 (2.9%) households: MRSA from one person/two pets and MRSP from three people/three pets. In seven additional households (4.1%), concordant strains were isolated from only the pets: MRSA in two households and MRSP in five households. There were no demographic or epidemiologic factors statistically associated with either human or animal carriage of MR staphylococci, or with concordant carriage by person-pet or pet-pet pairs. Lack of statistical associations may reflect an underpowered study.",
"title": ""
}
] |
[
{
"docid": "ced8cc9329777cc01cdb3e91772a29c2",
"text": "Manually annotating clinical document corpora to generate reference standards for Natural Language Processing (NLP) systems or Machine Learning (ML) is a timeconsuming and labor-intensive endeavor. Although a variety of open source annotation tools currently exist, there is a clear opportunity to develop new tools and assess functionalities that introduce efficiencies into the process of generating reference standards. These features include: management of document corpora and batch assignment, integration of machine-assisted verification functions, semi-automated curation of annotated information, and support of machine-assisted pre-annotation. The goals of reducing annotator workload and improving the quality of reference standards are important considerations for development of new tools. An infrastructure is also needed that will support largescale but secure annotation of sensitive clinical data as well as crowdsourcing which has proven successful for a variety of annotation tasks. We introduce the Extensible Human Oracle Suite of Tools (eHOST) http://code.google.com/p/ehost that provides such functionalities that when coupled with server integration offer an end-to-end solution to carry out small or large scale as well as crowd sourced annotation projects.",
"title": ""
},
{
"docid": "fda6123a2e3c67329b689c13bda8feda",
"text": "We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide’s map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.",
"title": ""
},
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "27461d678b02fff9a1aaf5621f5b347a",
"text": "Despite the promise of technology in education, many practicing teachers face several challenges when trying to effectively integrate technology into their classroom instruction. Additionally, while national statistics cite a remarkable improvement in access to computer technology tools in schools, teacher surveys show consistent declines in the use and integration of computer technology to enhance student learning. This article reports on primary technology integration barriers that mathematics teachers identified when using technology in their classrooms. Suggestions to overcome some of these barriers are also provided.",
"title": ""
},
{
"docid": "408d3db3b2126990611fdc3a62a985ea",
"text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "fde2eb0bb00d2173719f9f5715faa9b9",
"text": "Multi-instance learning, like other machine learning and data mining tasks, requires distance metrics. Although metric learning methods have been studied for many years, metric learners for multi-instance learning remain almost untouched. In this paper, we propose a framework called Multi-Instance MEtric Learning (MIMEL) to learn an appropriate distance under the multi-instance setting. The distance metric between two bags is defined using the Mahalanobis distance function. The problem is formulated by minimizing the KL divergence between two multivariate Gaussians under the constraints of maximizing the between-class bag distance and minimizing the within-class bag distance. To exploit the mechanism of how instances determine bag labels in multi-instance learning, we design a nonparametric density-estimation-based weighting scheme to assign higher “weights†to the instances that are more likely to be positive in positive bags. The weighting scheme itself has a small workload, which adds little extra computing costs to the proposed framework. Moreover, to further boost the classification accuracy, a kernel version of MIMEL is presented. We evaluate MIMEL, using not only several typical multi-instance tasks, but also two activity recognition datasets. The experimental results demonstrate that MIMEL achieves better classification accuracy than many state-of-the-art distance based algorithms or kernel methods for multi-instance learning.",
"title": ""
},
{
"docid": "6a0f8e2858ca4c67b281d4130ded4eba",
"text": "This paper presents a novel design of an omni-directional spherical robot that is mainly composed of a lucent ball-shaped shell and an internal driving unit. Two motors installed on the internal driving unit are used to realize the omni-directional motion of the robot, one motor is used to make the robot move straight and another is used to make it steer. Its motion analysis, kinematics modeling and controllability analysis are presented. Two typical motion simulations show that the unevenness of ground has big influence on the open-loop trajectory tracking of the robot. At last, motion performance of this spherical robot in several typical environments is presented with prototype experiments.",
"title": ""
},
{
"docid": "3ff330ab15962b09584e1636de7503ea",
"text": "By diverting funds away from legitimate partners (a.k.a publishers), click fraud represents a serious drain on advertising budgets and can seriously harm the viability of the internet advertising market. As such, fraud detection algorithms which can identify fraudulent behavior based on user click patterns are extremely valuable. Based on the BuzzCity dataset, we propose a novel approach for click fraud detection which is based on a set of new features derived from existing attributes. The proposed model is evaluated in terms of the resulting precision, recall and the area under the ROC curve. A final ensemble model based on 6 different learning algorithms proved to be stable with respect to all 3 performance indicators. Our final model shows improved results on training, validation and test datasets, thus demonstrating its generalizability to different datasets.",
"title": ""
},
{
"docid": "bf49aafc53fd8083d5f4e7e015443a71",
"text": "BACKGROUND\nThree intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD).\n\n\nOBJECTIVE\n1) To describe three main large-scale networks of the human brain; 2) to discuss the functioning of these neural networks in PTSD and related symptoms; and 3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders.\n\n\nMETHODS\nLiterature relevant to this commentary was reviewed.\n\n\nRESULTS\nIncreasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed to restore each of these neural networks and related clinical dysfunction are proposed.\n\n\nCONCLUSIONS\nNeuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.",
"title": ""
},
{
"docid": "fd7b9c5ab4379a277f0b39d6f54bcc18",
"text": "This article presents two probabilistic models for answering ranking in the multilingual question-answering (QA) task, which finds exact answers to a natural language question written in different languages. Although some probabilistic methods have been utilized in traditional monolingual answer-ranking, limited prior research has been conducted for answer-ranking in multilingual question-answering with formal methods. This article first describes a probabilistic model that predicts the probabilities of correctness for individual answers in an independent way. It then proposes a novel probabilistic method to jointly predict the correctness of answers by considering both the correctness of individual answers as well as their correlations. As far as we know, this is the first probabilistic framework that proposes to model the correctness and correlation of answer candidates in multilingual question-answering and provide a novel approach to design a flexible and extensible system architecture for answer selection in multilingual QA. An extensive set of experiments were conducted to show the effectiveness of the proposed probabilistic methods in English-to-Chinese and English-to-Japanese cross-lingual QA, as well as English, Chinese, and Japanese monolingual QA using TREC and NTCIR questions.",
"title": ""
},
{
"docid": "ff5f7772a0a578cfe1dd08816af8e2e7",
"text": "Moisture-associated skin damage (MASD) occurs when there is prolonged exposure of the skin to excessive amounts of moisture from incontinence, wound exudate or perspiration. Incontinenceassociated dermatitis (IAD) relates specifically to skin breakdown from faecal and/or urinary incontinence (Beeckman et al, 2009), and has been defined as erythema and oedema of the skin surface, which may be accompanied by bullae with serous exudate, erosion or secondary cutaneous infection (Gray et al, 2012). IAD may also be referred to as a moisture lesion, moisture ulcer, perineal dermatitis or diaper dermatitis (Ousey, 2012). The effects of ageing on the skin are known to affect skin integrity, as is the underdeveloped nature of very young skin; as such, elderly patients and neonates are particularly vulnerable to damage from moisture (Voegeli, 2007). The increase in moisture resulting from episodes of incontinence is exacerbated due to bacterial and enzymatic activity associated with urine and faeces, particularly when both are present, which leads to an increase in skin pH alongside over-hydration of the skin surface. This damages the natural protection of the acid mantle, the skin’s naturally acidic pH, which is an important defence mechanism against external irritants and microorganisms. This damage leads to the breakdown of vulnerable skin and increased susceptibility to secondary infection (Beeckman et al, 2009). It has become well recognised that presence of IAD greatly increases the likelihood of pressure ulcer development, since over-hydrated skin is much more susceptible to damage by extrinsic factors such as pressure, friction and shear as compared with normal skin (Clarke et al, 2010). While it is important to firstly understand that pressure and moisture damage are separate aetiologies and, secondly, be able to recognise the clinical differences in presentation, one of the factors to consider for prevention of pressure ulcers is minimising exposure to moisture/ incontinence. Another important consideration with IAD is the effect on the patient. IAD can be painful and debilitating, and has been associated with reduced quality of life. It can also be time-consuming and expensive to treat, which has an impact on clinical resources and financial implications (Doughty et al, 2012). IAD is known to impact on direct Incontinence-associated dermatitis (IAD) relates to skin breakdown from exposure to urine or faeces, and its management involves implementation of structured skin care regimens that incorporate use of appropriate skin barrier products to protect the skin from exposure to moisture and irritants. Medi Derma-Pro Foam & Spray Cleanser and Medi Derma-Pro Skin Protectant Ointment are recent additions to the Total Barrier ProtectionTM (Medicareplus International) range indicated for management of moderateto-severe IAD and other moisture-associated skin damage. This article discusses a series of case studies and product evaluations performed to determine clinical outcomes and clinician feedback based on use of the Medi Derma-Pro skin barrier products to manage IAD. Results showed improvements to patients’ skin condition following use of Medi Derma-Pro, and the cleanser and skin protectant ointment were considered better than or the same as the most equivalent products on the market.",
"title": ""
},
{
"docid": "94da9faa1ff45cfc5c8a8032d89cdd8f",
"text": "The RNA genome of human immunodeficiency virus type 1 (HIV-1) is enclosed in a cone-shaped capsid shell that disassembles following cell entry via a process known as uncoating. During HIV-1 infection, the capsid is important for reverse transcription and entry of the virus into the target cell nucleus. The small molecule PF74 inhibits HIV-1 infection at early stages by binding to the capsid and perturbing uncoating. However, the mechanism by which PF74 alters capsid stability and reduces viral infection is presently unknown. Here, we show, using atomic force microscopy (AFM), that binding of PF74 to recombinant capsid-like assemblies and to HIV-1 isolated cores stabilizes the capsid in a concentration-dependent manner. At a PF74 concentration of 10 μM, the mechanical stability of the core is increased to a level similar to that of the intrinsically hyperstable capsid mutant E45A. PF74 also prevented the complete disassembly of HIV-1 cores normally observed during 24 h of reverse transcription. Specifically, cores treated with PF74 only partially disassembled: the main body of the capsid remained intact and stiff, and a cap-like structure dissociated from the narrow end of the core. Moreover, the internal coiled structure that was observed to form during reverse transcription in vitro persisted throughout the duration of the measurement (∼24 h). Our results provide direct evidence that PF74 directly stabilizes the HIV-1 capsid lattice, thereby permitting reverse transcription while interfering with a late step in uncoating.IMPORTANCE The capsid-binding small molecule PF74 inhibits HIV-1 infection at early stages and perturbs uncoating. However, the mechanism by which PF74 alters capsid stability and reduces viral infection is presently unknown. We recently introduced time-lapse atomic force microscopy to study the morphology and physical properties of HIV-1 cores during the course of reverse transcription. Here, we apply this AFM methodology to show that PF74 prevented the complete disassembly of HIV-1 cores normally observed during 24 h of reverse transcription. Specifically, cores with PF74 only partially disassembled: the main body of the capsid remained intact and stiff, but a cap-like structure dissociated from the narrow end of the core HIV-1. Our result provides direct evidence that PF74 directly stabilizes the HIV-1 capsid lattice.",
"title": ""
},
{
"docid": "ab00048e25a3852c1f75014ac2529d52",
"text": "This paper describes a reference-clock-free, high-time-resolution on-chip timing jitter measurement circuit using a self-referenced clock and a cascaded time difference amplifier (TDA) with duty-cycle compensation. A self-referenced clock with multiples of the clock period removes the necessity for a reference clock. In addition, a cascaded TDA with duty-cycle compensation improves the time resolution while maintaining the operational speed. Test chips were designed and fabricated using 65 nm and 40 nm CMOS technologies. The areas occupied by the circuits are 1350 μm2 (with TDA, 65 nm), 490 μm2 (without TDA, 65 nm), 470 μm2 (with TDA, 40 nm), and 112 μm2 (without TDA, 40 nm). Time resolutions of 31 fs (with TDA) and 2.8 ps (without TDA) were achieved. The proposed new architecture provides all-digital timing jitter measurement with fine-time-resolution measurement capability, without requiring a reference clock.",
"title": ""
},
{
"docid": "85fb2cb99e5320ddde182d6303164da8",
"text": "The uncertainty about whether, in China, the genus Melia (Meliaceae) consists of one species (M. azedarach Linnaeus) or two species (M. azedarach and M. toosendan Siebold & Zuccarini) remains to be clarified. Although the two putative species are morphologically distinguishable, genetic evidence supporting their taxonomic separation is lacking. Here, we investigated the genetic diversity and population structure of 31 Melia populations across the natural distribution range of the genus in China. We used sequence-related amplified polymorphism (SRAP) markers and obtained 257 clearly defined bands amplified by 20 primers from 461 individuals. The polymorphic loci (P) varied from 35.17% to 76.55%, with an overall mean of 58.24%. Nei’s gene diversity (H) ranged from 0.13 to 0.31, with an overall mean of 0.20. Shannon’s information index (I) ranged from 0.18 to 0.45, with an average of 0.30. The genetic diversity of the total population (Ht) and within populations (Hs) was 0.37 ̆ 0.01 and 0.20 ̆ 0.01, respectively. Population differentiation was substantial (Gst = 0.45), and gene flow was low. Of the total variation, 31.41% was explained by differences among putative species, 19.17% among populations within putative species, and 49.42% within populations. Our results support the division of genus Melia into two species, which is consistent with the classification based on the morphological differentiation.",
"title": ""
},
{
"docid": "2c14b3968aadadaa62f569acccb37d46",
"text": "The main objective of this paper is to review the technologies and models used in the Automatic music transcription system. Music Information Retrieval is a key problem in the field of music signal analysis and this can be achieved with the use of music transcription systems. It has proven to be a very difficult issue because of the complex and deliberately overlapped spectral structure of musical harmonies. Generally, the music transcription systems branched as automatic and semi-automatic approaches based on the user interventions needed in the transcription system. Among these we give a close view of the automatic music transcription systems. Different models and techniques were proposed so far in the automatic music transcription systems. However the performance of the systems derived till now not completely matched to the performance of a human expert. In this paper we go through the techniques used previously for the music transcription and discuss the limitations with them. Also, we give some directions for the enhancement of the music transcription system and this can be useful for the researches to develop fully automatic music transcription system.",
"title": ""
},
{
"docid": "2a39202664217724ea0a49ceb83a82af",
"text": "This article proposes a competitive divide-and-conquer algorithm for solving large-scale black-box optimization problems for which there are thousands of decision variables and the algebraic models of the problems are unavailable. We focus on problems that are partially additively separable, since this type of problem can be further decomposed into a number of smaller independent subproblems. The proposed algorithm addresses two important issues in solving large-scale black-box optimization: (1) the identification of the independent subproblems without explicitly knowing the formula of the objective function and (2) the optimization of the identified black-box subproblems. First, a Global Differential Grouping (GDG) method is proposed to identify the independent subproblems. Then, a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is adopted to solve the subproblems resulting from its rotation invariance property. GDG and CMA-ES work together under the cooperative co-evolution framework. The resultant algorithm, named CC-GDG-CMAES, is then evaluated on the CEC’2010 large-scale global optimization (LSGO) benchmark functions, which have a thousand decision variables and black-box objective functions. The experimental results show that, on most test functions evaluated in this study, GDG manages to obtain an ideal partition of the index set of the decision variables, and CC-GDG-CMAES outperforms the state-of-the-art results. Moreover, the competitive performance of the well-known CMA-ES is extended from low-dimensional to high-dimensional black-box problems.",
"title": ""
},
{
"docid": "335a551d08afd6af7d90b35b2df2ecc4",
"text": "The interpretation of colonic biopsies related to inflammatory conditions can be challenging because the colorectal mucosa has a limited repertoire of morphologic responses to various injurious agents. Only few processes have specific diagnostic features, and many of the various histological patterns reflect severity and duration of the disease. Importantly the correlation with endoscopic and clinical information is often cardinal to arrive at a specific diagnosis in many cases.",
"title": ""
},
{
"docid": "a0aeb6f8f888fe53e7de4bad385c55fe",
"text": "Data transmission, storage and processing are the integral parts of today’s information systems. Transmission and storage of huge volume of data is a critical task in spite of the advancements in the integrated circuit technology and communication. In order to store and transmit such a data as it is, requires larger memory and increased bandwidth utilization. This in turn increases the hardware and transmission cost. Hence, before storage or transmission the size of data has to be reduced without affecting the information content of the data. Among the various encoding algorithms, the Lempel Ziv Marcov chain Algorithm (LZMA) algorithm which is used in 7zip was proved to be effective in unknown byte stream compression for reliable lossless data compression. However the encoding speed of software based coder is slow compared to the arrival time of real time data. Hence hardware implementation is needed since number of instructions processed per unit time depends directly on system clock. The aim of this work is to implement the LZMA algorithm on SPARTAN 3E FPGA to design hardware encoder/decoder with reduces circuit size and cost of storage. General Terms Data Compression, VLSI",
"title": ""
},
{
"docid": "eb2e440b20fa3a3d99f70f4b89f6c216",
"text": "The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach.",
"title": ""
},
{
"docid": "d047231a67ca02c525d174b315a0838d",
"text": "The goal of this article is to review the progress of three-electron spin qubits from their inception to the state of the art. We direct the main focus towards the exchange-only qubit (Bacon et al 2000 Phys. Rev. Lett. 85 1758-61, DiVincenzo et al 2000 Nature 408 339) and its derived versions, e.g. the resonant exchange (RX) qubit, but we also discuss other qubit implementations using three electron spins. For each three-spin qubit we describe the qubit model, the envisioned physical realization, the implementations of single-qubit operations, as well as the read-out and initialization schemes. Two-qubit gates and decoherence properties are discussed for the RX qubit and the exchange-only qubit, thereby completing the list of requirements for quantum computation for a viable candidate qubit implementation. We start by describing the full system of three electrons in a triple quantum dot, then discuss the charge-stability diagram, restricting ourselves to the relevant subsystem, introduce the qubit states, and discuss important transitions to other charge states (Russ et al 2016 Phys. Rev. B 94 165411). Introducing the various qubit implementations, we begin with the exchange-only qubit (DiVincenzo et al 2000 Nature 408 339, Laird et al 2010 Phys. Rev. B 82 075403), followed by the RX qubit (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502), the spin-charge qubit (Kyriakidis and Burkard 2007 Phys. Rev. B 75 115324), and the hybrid qubit (Shi et al 2012 Phys. Rev. Lett. 108 140503, Koh et al 2012 Phys. Rev. Lett. 109 250503, Cao et al 2016 Phys. Rev. Lett. 116 086801, Thorgrimsson et al 2016 arXiv:1611.04945). The main focus will be on the exchange-only qubit and its modification, the RX qubit, whose single-qubit operations are realized by driving the qubit at its resonant frequency in the microwave range similar to electron spin resonance. Two different types of two-qubit operations are presented for the exchange-only qubits which can be divided into short-ranged and long-ranged interactions. Both of these interaction types are expected to be necessary in a large-scale quantum computer. The short-ranged interactions use the exchange coupling by placing qubits next to each other and applying exchange-pulses (DiVincenzo et al 2000 Nature 408 339, Fong and Wandzura 2011 Quantum Inf. Comput. 11 1003, Setiawan et al 2014 Phys. Rev. B 89 085314, Zeuch et al 2014 Phys. Rev. B 90 045306, Doherty and Wardrop 2013 Phys. Rev. Lett. 111 050503, Shim and Tahan 2016 Phys. Rev. B 93 121410), while the long-ranged interactions use the photons of a superconducting microwave cavity as a mediator in order to couple two qubits over long distances (Russ and Burkard 2015 Phys. Rev. B 92 205412, Srinivasa et al 2016 Phys. Rev. B 94 205421). The nature of the three-electron qubit states each having the same total spin and total spin in z-direction (same Zeeman energy) provides a natural protection against several sources of noise (DiVincenzo et al 2000 Nature 408 339, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Kempe et al 2001 Phys. Rev. A 63 042307, Russ and Burkard 2015 Phys. Rev. B 91 235411). The price to pay for this advantage is an increase in gate complexity. We also take into account the decoherence of the qubit through the influence of magnetic noise (Ladd 2012 Phys. Rev. B 86 125408, Mehl and DiVincenzo 2013 Phys. Rev. B 87 195309, Hung et al 2014 Phys. Rev. 
B 90 045308), in particular dephasing due to the presence of nuclear spins, as well as dephasing due to charge noise (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434), fluctuations of the energy levels on each dot due to noisy gate voltages or the environment. Several techniques are discussed which partly decouple the qubit from magnetic noise (Setiawan et al 2014 Phys. Rev. B 89 085314, West and Fong 2012 New J. Phys. 14 083002, Rohling and Burkard 2016 Phys. Rev. B 93 205434) while for charge noise it is shown that it is favorable to operate the qubit on the so-called '(double) sweet spots' (Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434, Malinowski et al 2017 arXiv: 1704.01298), which are least susceptible to noise, thus providing a longer lifetime of the qubit.",
"title": ""
}
] |
scidocsrr
|
ccb7b5911f9cf9996be65752c1b4c275
|
Recognition and retrieval of mathematical expressions
|
[
{
"docid": "2511dfd2f00448125ef1ea28d84a7439",
"text": "Libraries and other institutions are interested in providing access to scanned versions of their large collections of handwritten historical manuscripts on electronic media. Convenient access to a collection requires an index, which is manually created at great labour and expense. Since current handwriting recognizers do not perform well on historical documents, a technique called word spotting has been developed: clusters with occurrences of the same word in a collection are established using image matching. By annotating “interesting” clusters, an index can be built automatically. We present an algorithm for matching handwritten words in noisy historical documents. The segmented word images are preprocessed to create sets of 1-dimensional features, which are then compared using dynamic time warping. We present experimental results on two different data sets from the George Washington collection. Our experiments show that this algorithm performs better and is faster than competing matching techniques.",
"title": ""
}
] |
[
{
"docid": "c07cb4fee98fd54b21f2f46b7384f171",
"text": "This study was conducted to provide basic data as part of a project to distinguish naturally occurring organic acids from added preservatives. Accordingly, we investigated naturally occurring levels of sorbic, benzoic and propionic acids in fish and their processed commodities. The levels of sorbic, benzoic and propionic acids in 265 fish and their processed commodities were determined by high-performance liquid chromatography-photodiode detection array (HPLC-PDA) of sorbic and benzoic acids and gas chromatography-mass spectrometry (GC/MS) of propionic acid. For propionic acid, GC-MS was used because of its high sensitivity and selectivity in complicated matrix samples. Propionic acid was detected in 36.6% of fish samples and 50.4% of processed fish commodities. In contrast, benzoic acid was detected in 5.6% of fish samples, and sorbic acid was not detected in any sample. According to the Korean Food and Drug Administration (KFDA), fishery products and salted fish may only contain sorbic acid in amounts up to 2.0 g kg-1 and 1.0 g kg-1, respectively. The results of the monitoring in this study can be considered violations of KFDA regulations (total 124; benzoic acid 8, propionic acid 116). However, it is difficult to distinguish naturally generated organic acids and artificially added preservatives in fishery products. Therefore, further studies are needed to extend the database for distinction of naturally generated organic acids and added preservatives.",
"title": ""
},
{
"docid": "291628b7e68f897bf23ca1ad1c0fdcfd",
"text": "Device-free Passive (DfP) human detection acts as a key enabler for emerging location-based services such as smart space, human-computer interaction, and asset security. A primary concern in devising scenario-tailored detecting systems is coverage of their monitoring units. While disk-like coverage facilitates topology control, simplifies deployment analysis, and is crucial for proximity-based applications, conventional monitoring units demonstrate directional coverage due to the underlying transmitter-receiver link architecture. To achieve omnidirectional coverage under such link-centric architecture, we propose the concept of omnidirectional passive human detection. The rationale is to exploit the rich multipath effect to blur the directional coverage. We harness PHY layer features to robustly capture the fine-grained multipath characteristics and virtually tune the shape of the coverage of the monitoring unit, which is previously prohibited with mere MAC layer RSSI. We design a fingerprinting scheme and a threshold-based scheme with off-the-shelf WiFi infrastructure and evaluate both schemes in typical clustered indoor scenarios. Experimental results demonstrate an average false positive of 8 percent and an average false negative of 7 percent for fingerprinting in detecting human presence in 4 directions. And both average false positive and false negative remain around 10 percent even with threshold-based methods.",
"title": ""
},
{
"docid": "4e2bfd87acf1287f36694634a6111b3f",
"text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.",
"title": ""
},
{
"docid": "a27b626618e225b03bec1eea8327be4d",
"text": "As a fundamental preprocessing of various multimedia applications, object proposal aims to detect the candidate windows possibly containing arbitrary objects in images with two typical strategies, window scoring and grouping. In this paper, we first analyze the feasibility of improving object proposal performance by integrating window scoring and grouping strategies. Then, we propose a novel object proposal method for RGB-D images, named elastic edge boxes. The initial bounding boxes of candidate object regions are efficiently generated by edge boxes, and further adjusted by grouping the super-pixels within elastic range to obtain more accurate candidate windows. To validate the proposed method, we construct the largest RGB-D image data set NJU1800 for object proposal with balanced object number distribution. The experimental results show that our method can effectively and efficiently generate the candidate windows of object regions and it outperforms the state-of-the-art methods considering both accuracy and efficiency.",
"title": ""
},
{
"docid": "40c88fe58f655c20844baadaa310abaa",
"text": "Pleated pneumatic artificial muscles (PPAMs), which have recently been developed at the Vrije Universiteit Brussel, Department of Mechanical Engineering are brought forward as robotic actuators in this paper. Their distinguishing feature is their pleated design, as a result of which their contraction forces and maximum displacement are very high compared to other pneumatic artificial muscles. The PPAM design, operation and characteristics are presented. To show how well they are suited for robotics, a rotative joint actuator, made of two antagonistically coupled PPAMs, is discussed. It has several properties that are similar to those of skeletal joint actuators. Positioning tasks are seen to be performed very accurately using simple PI control. Furthermore, the antagonistic actuator can easily be made to have a soft or careful touch, contributing greatly to a safe robot operation. In view of all characteristics PPAMs are very well suited for automation and robotic applications.",
"title": ""
},
{
"docid": "de83d02f5f120163ed86050ee6962f50",
"text": "Researchers have recently questioned the benefits associated with having high self-esteem. The authors propose that the importance of self-esteem lies more in how people strive for it rather than whether it is high or low. They argue that in domains in which their self-worth is invested, people adopt the goal to validate their abilities and qualities, and hence their self-worth. When people have self-validation goals, they react to threats in these domains in ways that undermine learning; relatedness; autonomy and self-regulation; and over time, mental and physical health. The short-term emotional benefits of pursuing self-esteem are often outweighed by long-term costs. Previous research on self-esteem is reinterpreted in terms of self-esteem striving. Cultural roots of the pursuit of self-esteem are considered. Finally, the alternatives to pursuing self-esteem, and ways of avoiding its costs, are discussed.",
"title": ""
},
{
"docid": "c5129d0acd299dcefb3be08caf7ef0b9",
"text": "Automatically detecting human social intentions and attitudes from spoken conversation is an important task for speech processing nd social computing. We describe a system for detecting interpersonal stance: whether a speaker is flirtatious, friendly, awkward, or ssertive. We make use of a new spoken corpus of over 1000 4-min speed-dates. Participants rated themselves and their interlocutors or these interpersonal stances, allowing us to build detectors for style both as interpreted by the speaker and as perceived by the earer. We use lexical, prosodic, and dialog features in an SVM classifier to detect very clear styles (the strongest 10% in each stance) ith up to 75% accuracy on previously seen speakers (50% baseline) and up to 59% accuracy on new speakers (48% baseline). feature analysis suggests that flirtation is marked by joint focus on the woman as a target of the conversation, awkwardness by ecreased speaker involvement, and friendliness by a conversational style including other-directed laughter and appreciations. Our ork has implications for our understanding of interpersonal stance, their linguistic expression, and their automatic extraction. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1ef2e54d021f9d149600f0bc7bebb0cd",
"text": "The field of open-domain conversation generation using deep neural networks has attracted increasing attention from researchers for several years. However, traditional neural language models tend to generate safe, generic reply with poor logic and no emotion. In this paper, an emotional conversation generation orientated syntactically constrained bidirectional-asynchronous framework called E-SCBA is proposed to generate meaningful (logical and emotional) reply. In E-SCBA, pre-generated emotion keyword and topic keyword are asynchronously introduced into the reply during the generation, and the process of decoding is much different from the most existing methods that generates reply from the first word to the end. A newly designed bidirectional-asynchronous decoder with the multi-stage strategy is proposed to support this idea, which ensures the fluency and grammaticality of reply by making full use of syntactic constraint. Through the experiments, the results show that our framework not only improves the diversity of replies, but gains a boost on both logic and emotion compared with baselines as well.",
"title": ""
},
{
"docid": "c948f838b19bd944fba4e8e985e716f7",
"text": "Network traffic classification, which has numerous applications from security to billing and network provisioning, has become a cornerstone of today’s computer networks. Previous studies have developed traffic classification techniques using classical machine learning algorithms and deep learning methods when large quantities of labeled data are available. However, capturing large labeled datasets is a cumbersome and time-consuming process. In this paper, we propose a semi-supervised approach that obviates the need for large labeled datasets. We first pre-train a model on a large unlabeled dataset where the input is the time series features of a few sampled packets. Then the learned weights are transferred to a new model that is re-trained on a small labeled dataset. We show that our semi-supervised approach achieves almost the same accuracy as a fully-supervised method with a large labeled dataset, though we use only 20 samples per class. In tests based on a dataset generated from the more challenging QUIC protocol, our approach yields 98% accuracy. To show its efficacy, we also test our approach on two public datasets. Moreover, we study three different sampling techniques and demonstrate that sampling packets from an arbitrary portion of a flow is sufficient for classification.",
"title": ""
},
{
"docid": "bfef5aaa8bbe366bc2a680675e9b2e82",
"text": "Traditional approaches to the study of cognition emphasize an information-processing view that has generally excluded emotion. In contrast, the recent emergence of cognitive neuroscience as an inspiration for understanding human cognition has highlighted its interaction with emotion. This review explores insights into the relations between emotion and cognition that have resulted from studies of the human amygdala. Five topics are explored: emotional learning, emotion and memory, emotion's influence on attention and perception, processing emotion in social stimuli, and changing emotional responses. Investigations into the neural systems underlying human behavior demonstrate that the mechanisms of emotion and cognition are intertwined from early perception to reasoning. These findings suggest that the classic division between the study of emotion and cognition may be unrealistic and that an understanding of human cognition requires the consideration of emotion.",
"title": ""
},
{
"docid": "d994b23ea551f23215232c0771e7d6b3",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "6789e2e452a19da3a00b95a27994ee62",
"text": "Reflection in healthcare education is an emerging topic with many recently published studies and reviews. This current systematic review of reviews (umbrella review) of this field explores the following aspects: which definitions and models are currently in use; how reflection impacts design, evaluation, and assessment; and what future challenges must be addressed. Nineteen reviews satisfying the inclusion criteria were identified. Emerging themes include the following: reflection is currently regarded as self-reflection and critical reflection, and the epistemology-of-practice notion is less in tandem with the evidence-based medicine paradigm of modern science than expected. Reflective techniques that are recognised in multiple settings (e.g., summative, formative, group vs. individual) have been associated with learning, but assessment as a research topic, is associated with issues of validity, reliability, and reproducibility. Future challenges include the epistemology of reflection in healthcare education and the development of approaches for practising and assessing reflection without loss of theoretical background.",
"title": ""
},
{
"docid": "61ffc67f0e242afd8979d944cbe2bff4",
"text": "Diprosopus is a rare congenital malformation associated with high mortality. Here, we describe a patient with diprosopus, multiple life-threatening anomalies, and genetic mutations. Prenatal diagnosis and counseling made a beneficial impact on the family and medical providers in the care of this case.",
"title": ""
},
{
"docid": "1b314c55b86355e1fd0ef5d5ce9a89ba",
"text": "3D printing technology is rapidly maturing and becoming ubiquitous. One of the remaining obstacles to wide-scale adoption is that the object to be printed must fit into the working volume of the 3D printer. We propose a framework, called Chopper, to decompose a large 3D object into smaller parts so that each part fits into the printing volume. These parts can then be assembled to form the original object. We formulate a number of desirable criteria for the partition, including assemblability, having few components, unobtrusiveness of the seams, and structural soundness. Chopper optimizes these criteria and generates a partition either automatically or with user guidance. Our prototype outputs the final decomposed parts with customized connectors on the interfaces. We demonstrate the effectiveness of Chopper on a variety of non-trivial real-world objects.",
"title": ""
},
{
"docid": "9377b5e71b5b176cb422a44c962da4e4",
"text": "Power Gating has become one of the most widely used circuit design techniques for reducing leakage current. Its concept is very simple, but its application to standard-cell VLSI designs involves many careful considerations. The great complexity of designing a power-gated circuit originates from the side effects of inserting current switches, which have to be resolved by a combination of extra circuitry and customized tools and methodologies. In this tutorial we survey these design considerations and look at the best practice within industry and academia. Topics include output isolation and data retention, current switch design and sizing, and physical design issues such as power networks, increases in area and wirelength, and power grid analysis. Designers can benefit from this tutorial by obtaining a better understanding of implications of power gating during an early stage of VLSI designs. We also review the ways in which power gating has been improved. These include reducing the sizes of switches, cutting transition delays, applying power gating to smaller blocks of circuitry, and reducing the energy dissipated in mode transitions. Power Gating has also been combined with other circuit techniques, and these hybrids are also reviewed. Important open problems are identified as a stimulus to research.",
"title": ""
},
{
"docid": "b2e02a1818f862357cf5764afa7fa197",
"text": "The goal of this paper is the automatic identification of characters in TV and feature film material. In contrast to standard approaches to this task, which rely on the weak supervision afforded by transcripts and subtitles, we propose a new method requiring only a cast list. This list is used to obtain images of actors from freely available sources on the web, providing a form of partial supervision for this task. In using images of actors to recognize characters, we make the following three contributions: (i) We demonstrate that an automated semi-supervised learning approach is able to adapt from the actor’s face to the character’s face, including the face context of the hair; (ii) By building voice models for every character, we provide a bridge between frontal faces (for which there is plenty of actor-level supervision) and profile (for which there is very little or none); and (iii) by combining face context and speaker identification, we are able to identify characters with partially occluded faces and extreme facial poses. Results are presented on the TV series ‘Sherlock’ and the feature film ‘Casablanca’. We achieve the state-of-the-art on the Casablanca benchmark, surpassing previous methods that have used the stronger supervision available from transcripts.",
"title": ""
},
{
"docid": "8ab6ea781b35e1ac1784ddac4cdd9fd7",
"text": "PURPOSE\nThe identification of a quantifiable dose-response relationship for strength training is important to the prescription of proper training programs. Although much research has been performed examining strength increases with training, taken individually, they provide little insight into the magnitude of strength gains along the continuum of training intensities, frequencies, and volumes. A meta-analysis of 140 studies with a total of 1433 effect sizes (ES) was carried out to identify the dose-response relationship.\n\n\nMETHODS\nStudies employing a strength-training intervention and containing data necessary to calculate ES were included in the analysis.\n\n\nRESULTS\nES demonstrated different responses based on the training status of the participants. Training with a mean intensity of 60% of one repetition maximum elicits maximal gains in untrained individuals, whereas 80% is most effective in those who are trained. Untrained participants experience maximal gains by training each muscle group 3 d.wk and trained individuals 2 d.wk. Four sets per muscle group elicited maximal gains in both trained and untrained individuals.\n\n\nCONCLUSION\nThe dose-response trends identified in this analysis support the theory of progression in resistance program design and can be useful in the development of training programs designed to optimize the effort to benefit ratio.",
"title": ""
},
{
"docid": "887f8d2ad0d688ab8046ac9951c468a8",
"text": "Wearable sensor technologies are essential to the realization of personalized medicine through continuously monitoring an individual’s state of health. Sampling human sweat, which is rich in physiological information, could enable non-invasive monitoring. Previously reported sweat-based and other non-invasive biosensors either can only monitor a single analyte at a time or lack on-site signal processing circuitry and sensor calibration mechanisms for accurate analysis of the physiological state. Given the complexity of sweat secretion, simultaneous and multiplexed screening of target biomarkers is critical and requires full system integration to ensure the accuracy of measurements. Here we present a mechanically flexible and fully integrated (that is, no external analysis is needed) sensor array for multiplexed in situ perspiration analysis, which simultaneously and selectively measures sweat metabolites (such as glucose and lactate) and electrolytes (such as sodium and potassium ions), as well as the skin temperature (to calibrate the response of the sensors). Our work bridges the technological gap between signal transduction, conditioning (amplification and filtering), processing and wireless transmission in wearable biosensors by merging plastic-based sensors that interface with the skin with silicon integrated circuits consolidated on a flexible circuit board for complex signal processing. This application could not have been realized using either of these technologies alone owing to their respective inherent limitations. The wearable system is used to measure the detailed sweat profile of human subjects engaged in prolonged indoor and outdoor physical activities, and to make a real-time assessment of the physiological state of the subjects. This platform enables a wide range of personalized diagnostic and physiological monitoring applications.",
"title": ""
},
{
"docid": "e72f3c598623b6d226c0aca982aecd7d",
"text": "Researchers in the ontology-design field have developed the content for ontologies in many domain areas. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. We developed a suite of tools for managing multiple ontologies. These suite provides users with a uniform framework for comparing, aligning, and merging ontologies, maintaining versions, translating between different formalisms. Two of the tools in the suite support semi-automatic ontology merging: iPrompt is an interactive ontologymerging tool that guides the user through the merging process, presenting him with suggestions for next steps and identifying inconsistencies and potential problems. AnchorPrompt uses a graph structure of ontologies to find correlation between concepts and to provide additional information for iPrompt. 1 1 Managing Multiple Ontologies Researchers have pursued development of ontologies—explicit formal specifications of domains of discourse—on the premise that ontologies facilitate knowledge sharing and reuse (Musen, 1992; Gruber, 1993). Today, ontology development is moving from academic knowledge-representation projects to the world of e-commerce. Companies use ontologies to share information and to guide customers through their Web sites. Ontologies on the World-Wide Web range from large taxonomies categorizing Web sites (such as Yahoo!) to categorizations of products for sale and their features (such as Amazon.com). In an effort to enable machine-interpretable representation of knowledge on the Web, the WWW Consortium has developed the Resource Description Framework (W3C, 2000), a language for encoding semantic information on Web pages. The WWW consortium is also working on OWL, a more high-level language for semantic annotation on the Web.1 Such encoding makes it possible for electronic agents searching for information to share the common understanding of the semantics of the data represented on the Web. Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Spackman, 2000) and the semantic network of the Unified Medical Language System (Lindberg et al., 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). With this widespread distributed use of ontologies, different parties inevitably develop ontologies http://www.w3.org/2001/sw/WebOnt/",
"title": ""
}
] |
scidocsrr
|
ccf6745565e049983139ff64fbfd9d88
|
c-Ha-ras oncogene expression in immortalized human keratinocytes (HaCaT) alters growth potential in vivo but lacks correlation with malignancy.
|
[
{
"docid": "a00d2d9dde3f767ce6b7308a9cdd8f03",
"text": "Using an improved method of gel electrophoresis, many hitherto unknown proteins have been found in bacteriophage T4 and some of these have been identified with specific gene products. Four major components of the head are cleaved during the process of assembly, apparently after the precursor proteins have assembled into some large intermediate structure.",
"title": ""
}
] |
[
{
"docid": "d63609f3850ceb80945ab72b242fcfe3",
"text": "Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and open-source software (OSS) systems. The objective of this paper is to increase our understanding of the practical benefits that the MCR process produces on reviewed source code. To that end, we empirically explore the problems fixed through MCR in OSS systems. We manually classified over 1,400 changes taking place in reviewed code from two OSS projects into a validated categorization scheme. Surprisingly, results show that the types of changes due to the MCR process in OSS are strikingly similar to those in the industry and academic systems from literature, featuring the similar 75:25 ratio of maintainability-related to functional problems. We also reveal that 7–35% of review comments are discarded and that 10–22% of the changes are not triggered by an explicit review comment. Patterns emerged in the review data; we investigated them revealing the technical factors that influence the number of changes due to the MCR process. We found that bug-fixing tasks lead to fewer changes and tasks with more altered files and a higher code churn have more changes. Contrary to intuition, the person of the reviewer had no impact on the number of changes.",
"title": ""
},
{
"docid": "a7d8e333afb14c90c551bd0ad67dbdc7",
"text": "The consensus algorithm for the medical management of type 2 diabetes was published in August 2006 with the expectation that it would be updated, based on the availability of new interventions and new evidence to establish their clinical role. The authors continue to endorse the principles used to develop the algorithm and its major features. We are sensitive to the risks of changing the algorithm cavalierly or too frequently, without compelling new information. An update to the consensus algorithm published in January 2008 specifically addressed safety issues surrounding the thiazolidinediones. In this revision, we focus on the new classes of medications that now have more clinical data and experience.",
"title": ""
},
{
"docid": "c7ad0b0ed2db2a256609a137b24359e3",
"text": "PREFACE This report is a revised and extended edition of an earlier report. In 1994, the present author, with colleagues Lawrence Schneider and Leda Ricci from the University of Michigan Transportation Research Institute (UMTRI), undertook a thorough review of the academic and technical literature relating to automotive seat design, with a particular emphasis on accommodation and comfort. In the intervening six years, attention to seating issues has increased, to judge from the volume of related material published at SAE meetings, in ergonomics journals, and in other forums. Technological advances have increased the use of seat surface pressure distribution in seat design and assessment. Recently, a new H-point manikin was developed at UMTRI under the auspices of the Automotive Seat and Package Evaluation and Comparison Tools (ASPECT) program. As part of the program, several detailed studies of the interaction between vehicle occupants and seats were conducted that led to new insights into how seat geometry and stiffness affects occupant posture. The updated and expanded version of this report was undertaken to incorporate the findings of the ASPECT program and other studies that have been reported in the open literature since 1994. The anthropometric analyses supporting seat dimension recommendations have been revised with reference to the latest U.S. civilian anthropometry, the third National Health and Nutrition Examination Survey (NHANES III), conducted from 1988 to 1994. Although anthropometric analyses from the survey have not yet been published, the data have recently been made available and were analyzed for this report. Much of the new information in this report is based on research conducted at UMTRI during the past six years under sponsorship of auto industry companies. ii iii CONTENTS",
"title": ""
},
{
"docid": "b94e282e67b39bf0d28b0dc0b95a07e1",
"text": "This paper presents an “MBSE-Assisted FMEA” approach. The approach is evolutionary in nature and accommodates the current NASA and Air Force FMEA procedures and practices required for many of their contracts; therefore, it can be implemented immediately. The approach considers current MBSE philosophy, methods, and tools, introducing them gradually to evolve the current FMEA process to a model-based failure modes, causes, and effects generation process. To assist the implementation of this transitioning process from current FMEA to the Model-Based FMEA, the paper discusses challenges and opportunities/benefits that can help establish a feasible and viable near term and long term implementation plan. With the MBSE-Assisted FMEA as a near term approach and Model-Based FMEA as a longer term implementation goal, FMEA will become a more effective and cost efficient reliability process that is integrated with MBSE to support Design-for-Reliability and Model-Based Mission Assurance activities.",
"title": ""
},
{
"docid": "78bdb038e90ab5a7518df0849ba2b698",
"text": "OBJECTIVE\nWe aimed to confirm the effect of combined treatment with celecoxib and rebamipide would be more effective than celecoxib alone for prevention of upper gastrointestinal (GI) events.\n\n\nMETHODS\nPatients with rheumatoid arthritis, osteoarthritis, and low back pain were enrolled in this study. Patients were randomized to two groups: a monotherapy group (100 mg celecoxib twice daily) and a combination therapy group (add on 100 mg of rebamipide three times a day). The GI mucosal injury was evaluated by endoscopic examination before treatment and at 3 months. The primary endpoint was to evaluate the preventive effect of the combination therapy group for GI events, endoscopic upper GI ulcers and intolerable GI symptoms, compared with the monotherapy group.\n\n\nRESULTS\nSeventy-five patients were enrolled. Sixty-five patients were analyzed (16 males, 49 females; mean age: 67 ± 13 years). The prevalence of upper GI events, five of endoscopic GI ulcers and one of intolerable GI symptoms, were 6/34 (17.6%) in the monotherapy group and 0/31 in the combination therapy group, p = 0.0252.\n\n\nCONCLUSIONS\nThe combination therapy group was more effective than the monotherapy group for prevention of upper GI events in this study. Rebamipide might be a candidate for an option to prevent COX-2 selective inhibitor-induced upper GI events.",
"title": ""
},
{
"docid": "84f688155a92ed2196974d24b8e27134",
"text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. Reproduction in whole or in part is permitted for any purpose of the United States Government Requests for reprints should be sent to the",
"title": ""
},
{
"docid": "53cf1514165b5f2402ad33831eb1ea16",
"text": "We propose in this work a patch-based image labeling method relying on a label propagation framework. Based on image intensity similarities between the input image and an anatomy textbook, an original strategy which does not require any nonrigid registration is presented. Following recent developments in nonlocal image denoising, the similarity between images is represented by a weighted graph computed from an intensity-based distance between patches. Experiments on simulated and in vivo magnetic resonance images show that the proposed method is very successful in providing automated human brain labeling.",
"title": ""
},
{
"docid": "6ebd75996b8a652720b23254c9d77be4",
"text": "This paper focuses on a biometric cryptosystem implementation and evaluation based on a number of fingerprint texture descriptors. The texture descriptors, namely, the Gabor filter-based FingerCode, a local binary pattern (LBP), and a local direction pattern (LDP), and their various combinations are considered. These fingerprint texture descriptors are binarized using a biometric discretization method and used in a fuzzy commitment scheme (FCS). We constructed the biometric cryptosystems, which achieve a good performance, by fusing discretized fingerprint texture descriptors and using effective error-correcting codes. We tested the proposed system on a FVC2000 DB2a fingerprint database, and the results demonstrate that the new system significantly improves the performance of the FCS for texture-based",
"title": ""
},
{
"docid": "8774c5a504e2d04e8a49e3625327828a",
"text": "Forest fire prediction constitutes a significant component of forest fire management. It plays a major role in resource allocation, mitigation and recovery efforts. This paper presents a description and analysis of forest fire prediction methods based on artificial intelligence. A novel forest fire risk prediction algorithm, based on support vector machines, is presented. The algorithm depends on previous weather conditions in order to predict the fire hazard level of a day. The implementation of the algorithm using data from Lebanon demonstrated its ability to accurately predict the hazard of fire occurrence.",
"title": ""
},
{
"docid": "46df34ed9fb6abcc0e6250972fca1faa",
"text": "Reliable, scalable and secured framework for predicting Heart diseases by mining big data is designed. Components of Apache Hadoop are used for processing of big data used for prediction. For increasing the performance, scalability, and reliability Hadoop clusters are deployed on Google Cloud Storage. Mapreduce based Classification via clustering method is proposed for efficient classification of instances using reduced attributes. Mapreduce based C 4.5 decision tree algorithm is improved and implemented to classify the instances. Datasets are analyzed on WEKA (Waikato Environment for Knowledge Analysis) and Hadoop. Classification via clustering method performs classification with 98.5% accuracy on WEKA with reduced attributes. On Mapreduce paradigm using this approach execution time is improved. With clustered instances 49 nodes of decision tree are reduced to 32 and execution time of Mapreduce program is reduced from 113 seconds to 84 seconds. Mapreduce based decision trees present classification of instances more accurately as compared to WEKA based decision trees.",
"title": ""
},
{
"docid": "b889b863e0344361be7d8eeafca872c5",
"text": "This paper presents a singular-value-based semi-fragile watermarking scheme for image content authentication. The proposed scheme generates secure watermark by performing a logical operation on content-dependent watermark generated by a singular-value-based sequence and contentindependent watermark generated by a private-key-based sequence. It next employs the adaptive quantization method to embed secure watermark in approximation subband of each 4 4 block to generate the watermarked image. The watermark extraction process then extracts watermark using the parity of quantization results from the probe image. The authentication process starts with regenerating secure watermark following the same process. It then constructs error maps to compute five authentication measures and performs a three-level process to authenticate image content and localize tampered areas. Extensive experimental results show that the proposed scheme outperforms five peer schemes and its two variant systems and is capable of identifying intentional tampering, incidental modification, and localizing tampered regions under mild to severe content-preserving modifications. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "946a7243eb00f84d7ce1a804f4a86d51",
"text": "This paper seeks to contribute to the growing literature on children and computer programming by focusing on a programming language for children in Kindergarten through second grade. Sixty-two students were exposed to a 6-week curriculum using ScartchJr. They learned foundational programming concepts and applied those concepts to create personally meaningful projects using the ScratchJr programming app. This paper addresses the following research question: Which ScratchJr programming blocks do young children choose to use in their own projects after they have learned them all through a tailored programming curriculum? Data was collected in the form of the students’ combined 977 projects, and analyzed for patterns and differences across grades. This paper summarizes findings and suggests potential directions for future research. Implications for the use of ScratchJr as an introductory programming language for young children are also discussed.",
"title": ""
},
{
"docid": "cfd01fa97733c0df6e07b3b7ddebb4e2",
"text": "Radio frequency identification (RFID) is an emerging technology in the building industry. Many researchers have demonstrated how to enhance material control or production management with RFID. However, there is a lack of integrated understanding of lifecycle management. This paper develops and demonstrates a framework to Information Lifecycle Management (ILM) with RFID for material control. The ILM framework includes key RFID checkpoints and material types to facilitate material control on construction sites. In addition, this paper presents a context-aware scenario to examine multiple on-site context and RFID parameters. From tagging nodes at the factory to reading nodes at each lifecycle stage, this paper demonstrates how to manage complex construction materials with RFID and how to construct integrated information flows at different lifecycle stages. To validate key material types and the scenario, the study reports on two on-site trials: read distance test and on-site simulation. Finally, the research provides discussion and recommended approaches to implementing ILM. The results show that the ILM framework has the potential for a variety of stakeholders to adopt RFID in the building industry. This paper provides the understanding about the effectiveness of ILM with RFID for material control, which can serve as a base for adopting other IT technologies in the building industry. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "313b4f6832d45a428fe264cc16e6ff9f",
"text": "This theme issue provides a comprehensive collection of original research articles on the creation of diverse types of theranostic upconversion nanoparticles, their fundamental interactions in biology, as well as their biophotonic applications in noninvasive diagnostics and therapy.",
"title": ""
},
{
"docid": "5201e874c8751cddba3a358a2f1df998",
"text": "Ururau is a free and open-source software written in the Java programming language. The use of the software allows the construction of discrete simulation models on the three layers that constitute the structure of the software. It means that the models can be developed in the upper layer of the graphic interface faster and of simple programming in the core of the software (new commands for example) or in the lower layer of the software libraries. The use of the Ururau has made the accomplishment of research of new functions and algorithms in the field of discrete event systems simulation possible.",
"title": ""
},
{
"docid": "866b95a50dede975eeff9aeec91a610b",
"text": "In this paper, we focus on differential privacy preserving spectral graph analysis. Spectral graph analysis deals with the analysis of the spectra (eigenvalues and eigenvector components) of the graph’s adjacency matrix or its variants. We develop two approaches to computing the ε-differential eigen decomposition of the graph’s adjacency matrix. The first approach, denoted as LNPP, is based on the Laplace Mechanism that calibrates Laplace noise on the eigenvalues and every entry of the eigenvectors based on their sensitivities. We derive the global sensitivities of both eigenvalues and eigenvectors based on the matrix perturbation theory. Because the output eigenvectors after perturbation are no longer orthogonormal, we postprocess the output eigenvectors by using the state-of-the-art vector orthogonalization technique. The second approach, denoted as SBMF, is based on the exponential mechanism and the properties of the matrix Bingham-von Mises-Fisher density for network data spectral analysis. We prove that the sampling procedure achieves differential privacy. We conduct empirical evaluation on a real social network data and compare the two approaches in terms of utility preservation (the accuracy of spectra and the accuracy of low rank approximation) under the same differential privacy threshold. Our empirical evaluation results show that LNPP generally incurs smaller utility loss.",
"title": ""
},
{
"docid": "cc9ff40f0c210ad0669bce44b5043e48",
"text": "Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).",
"title": ""
},
{
"docid": "604acce1aeb26ea5b6a72e230752ff60",
"text": "Research in experimental psychology suggests that, in violation of Bayes' rule, most people tend to \"overreact\" to unexpected and dramatic news events. This study of market efficiency investigates whether such behavior affects stock prices. The empirical evidence, based on CRSP monthly return data, is consistent with the overreaction hypothesis. Substantial weak form market inefficiencies are discovered. The results also shed new light on the January returns earned by prior \"winners\" and \"losers.\" Portfolios of losers experience exceptionally large January returns as late as five years after portfolio formation. As ECONOMISTS INTERESTED IN both market behavior and the psychology of individual decision making, we have been struck by the similarity of two sets of empirical findings. Both classes of behavior can be characterized as displaying ouerreaction. This study was undertaken to investigate the possibility that these phenomena are related by more than just appearance. We begin by describing briefly the individual and market behavior that piqued our interest. The term overreaction carries with it an implicit comparison to some degree of reaction that is considered to be appropriate. What is an appropriate reaction? One class.of tasks which have a well-established norm are probability revision problems for which Bayes' rule prescribes the correct reaction to new information. It has now been well-established that Bayes' rule is not an apt characterization of how individuals actually respond to new data (Kahneman et al. [14]). In revising their beliefs, individuals tend to overweight recent information and underweight prior (or base rate) data. People seem to make predictions according to a simple matching rule: \"The predicted value is selected so that the standing of the case in the distribution of outcomes matches its standing in the distribution of impressions\" (Kahneman and Tversky [14, p. 4161). This rule-of-thumb, an instance of what Kahneman and Tversky call the representativeness heuristic, violates the basic statistical principal that the extremeness of predictions must be moderated by considerations of predictability. Grether [12] has replicated this finding under incentive compatible conditions. There is also considerable evidence that the actual expectations of professional security analysts and economic forecasters display the same overreaction bias (for a review, see De Bondt [7]). One of the earliest observations about overreaction in markets was made by J. M. Keynes:\". . .day-to-day fluctuations in the profits of existing investments, * University of Wisconsin at Madison and Cornell University, respectively. The financial support of the C.I.M. Doctoral Fellowship Program (Brussels, Belgium) and the Cornell Graduate School of Management is gratefully acknowledged. We received helpful comments from Seymour Smidt, Dale Morse, Peter Bernstein, Fischer Black, Robert Jarrow, Edwin Elton, and Ross Watts. 794 The Journal of Finance which are obviously of an ephemeral and nonsignificant character, tend to have an altogether excessive, and even an absurd, influence on the market\" [17, pp. 153-1541. About the same time, Williams noted in this Theory of Investment Value that \"prices have been based too much on current earning power and too little on long-term dividend paying power\" [28, p. 191. 
More recently, Arrow has concluded that the work of Kahneman and Tversky \"typifies very precisely the exessive reaction to current information which seems to characterize all the securities and futures markets\" [I, p. 51. Two specific examples of the research to which Arrow was referring are the excess volatility of security prices and the so-called price earnings ratio anomaly. The excess volatility issue has been investigated most thoroughly by Shiller [27]. Shiller interprets the Miller-Modigliani view of stock prices as a constraint on the likelihood function of a price-dividend sample. Shiller concludes that, at least over the last century, dividends simply do not vary enough to rationally justify observed aggregate price movements. Combining the results with Kleidon's [18] findings that stock price movements are strongly correlated with the following year's earnings changes suggests a clear pattern of overreaction. In spite of the observed trendiness of dividends, investors seem to attach disproportionate importance to short-run economic developments.' The price earnings ratio (PIE) anomaly refers to the observation that stocks with extremely low PIE ratios (i.e., lowest decile) earn larger risk-adjusted returns than high PIE stocks (Basu [3]). Most financial economists seem to regard the anomaly as a statistical artifact. Explanations are usually based on alleged misspecification of the capital asset pricing model (CAPM). Ball [2] emphasizes the effects of omitted risk factors. The PIE ratio is presumed to be a proxy for some omitted factor which, if included in the \"correct\" equilibrium valuation model, would eliminate the anomaly. Of course, unless these omitted factors can be identified, the hypothesis is untestable. Reinganum [21] has claimed that the small firm effect subsumes the PIE effect and that both are related to the same set of missing (and again unknown) factors. However, Basu [4] found a significant PIE effect after controlling for firm size, and earlier Graham [ l l ] even found an effect within the thirty Dow Jones Industrials, hardly a group of small firms! An alternative behavioral explanation for the anomaly based on investor overreaction is what Basu called the \"price-ratio\" hypothesis (e.g., Dreman [8]). Companies with very low PIE'Sare thought to be temporarily \"undervalued\" because investors become excessively pessimistic after a series of bad earnings reports or other bad news. Once future earnings turn out to be better than the unreasonably gloomy forecasts, the price adjusts. Similarly, the equity of companies with very high PIE'Sis thought to be \"overvalued,\" before (predictably) falling in price. While the overreaction hypothesis has considerable a priori appeal, the obvious question to ask is: How does the anomaly survive the process of arbitrage? There Of course, the variability of stock prices may also reflect changes in real interest rates. If so, the price movements of other assets-such as land or housing-should match those of stocks. However, this is not actually observed. A third hypothesis, advocated by Marsh and Merton [19], is that Shiller's findings are a result of his misspecification of the dividend process. 795 Does the Stock Market Overreact? is really a more general question here. What are the equilibria conditions for markets in which some agents are not rational in the sense that they fail to revise their expectations according to Bayes' rule? Russell and Thaler [24] address this issue. 
They conclude that the existence of some rational agents is not sufficient to guarantee a rational expectations equilibrium in an economy with some of what they call quasi-rational agents. (The related question of market equilibria with agents having heterogeneous expectations is investigated by Jarrow [13].) While we are highly sensitive to these issues, we do not have the space to address them here. Instead, we will concentrate on an empirical test of the overreaction hypothesis. If stock prices systematically overshoot, then their reversal should be predictable from past return data alone, with no use of any accounting data such as earnings. Specifically, two hypotheses are suggested: (1)Extreme movements in stock prices will be followed by subsequent price movements in the opposite direction. (2) The more extreme the initial price movement, the greater will be the subsequent adjustment. Both hypotheses imply a violation of weak-form market efficiency. To repeat, our goal is to test whether the overreaction hypothesis is predictive. In other words, whether it does more for us than merely to explain, ex post, the PIE effect or Shiller's results on asset price dispersion. The overreaction effect deserves attention because it represents a behavioral principle that may apply in many other contexts. For example, investor overreaction possibly explains Shiller's earlier [26] findings that when long-term interest rates are high relative to short rates, they tend to move down later on. Ohlson and Penman [20] have further suggested that the increased volatility of security returns following stock splits may also be linked to overreaction. The present empirical tests are to our knowledge the first attempt to use a behavioral principle to predict a new market anomaly. The remainder of the paper is organized as follows. The next section describes the actual empirical tests we have performed. Section I1 describes the results. Consistent with the overreaction hypothesis, evidence of weak-form market inefficiency is found. We discuss the implications for other empirical work on asset pricing anomalies. The paper ends with a brief summary of conclusions. I. The Overreaction Hypothesis: Empirical Tests The empirical testing procedures are a variant on a design originally proposed by Beaver and Landsman [5] in a different context. Typically, tests of semistrong form market efficiency start, at time t = 0, with the formation of portfolios on the basis of some event that affects all stocks in the portfolio, say, an earnings announcement. One then goes on to investigate whether later on ( t > 0) the estimated residual portfolio return rip,--measured relative to the single-period CAPM-equals zero. Statistically significant departures from zero are interpreted as evidence consistent with semistrong form market inefficiency, even though the results may also be due to misspecification of the CAPM, misestimation of the relevant alphas and/or betas, or simply market inefficiency of the weak form. 796 The Journal of Finance In contrast, the tests in this study assess the extent to which systematic nonzero",
"title": ""
},
{
"docid": "68cd83d94d67c16b19668f1fba62a50e",
"text": "This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with linear continuous dynamics. The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2018. In its second edition, 9 tools have been applied to solve six different benchmark problems in the category for linear continuous dynamics (in alphabetical order): CORA, CORA/SX, C2E2, Flow*, HyDRA, Hylaa, Hylaa-Continuous, JuliaReach, SpaceEx, and XSpeed. This report is a snapshot of the current landscape of tools and the types of benchmarks they are particularly suited for. Due to the diversity of problems, we are not ranking tools, yet the presented results probably provide the most complete assessment of tools for the safety verification of continuous and hybrid systems with linear continuous dynamics up to this date. G. Frehse (ed.), ARCH18 (EPiC Series in Computing, vol. 54), pp. 23–52 ARCH-COMP18 Linear Dynamics Althoff et al.",
"title": ""
},
{
"docid": "0b5a64aad1ff839cdb2a070a3bbc4d13",
"text": "The use of robots to manipulate surgical instruments inside the patient has already moved from the world of fiction to fact. While the widespread use of full-function surgical robots is still many years away, less sophisticated robots that perform very specific surgical functions are already at a stage where the typical hospital can consider their use. Currently, the most affordable and commonly used type of \"surgical-assist\" robot is the robotic endoscope holder, which is used to hold and position rigid endoscopes during minimally invasive surgery. In this study, we introduce readers to the topic of surgical robotics, focusing specifically on robotic endoscope holders. The study includes a Technology Management Guide, in which we discuss who should and who shouldn't consider implementing such robots, and it includes our evaluation protocol and findings for one such robot, the Computer Motion AESOP 3000. We judged the evaluated system based on its performance relative to the human scope holders it is designed to replace, as well as its safety and ease of use. While we found the AESOP 3000 to be an acceptable, and sometimes preferred, alternative to the use of a human scope holder, we caution that many healthcare facilities won't see sufficient clinical benefit to warrant its purchase at this time.",
"title": ""
}
] |
scidocsrr
|
da09b707800f051d7bf3ee4b62e7b9fb
|
A Single Shot Text Detector with Scale-adaptive Anchors
|
[
{
"docid": "827f1b95a91a402e286085f9531b541e",
"text": "An unconstrained end-to-end text localization and recognition method is presented. The method detects initial text hypothesis in a single pass by an efficient region-based method and subsequently refines the text hypothesis using a more robust local text model, which deviates from the common assumption of region-based methods that all characters are detected as connected components.",
"title": ""
},
{
"docid": "1fb2ec397703e4e77769046d1d347132",
"text": "Many existing scene parsing methods adopt Convolutional Neural Networks with fixed-size receptive fields, which frequently result in inconsistent predictions of large objects and invisibility of small objects. To tackle this issue, we propose a scale-adaptive convolution to acquire flexiblesize receptive fields during scene parsing. Through adding a new scale regression layer, we can dynamically infer the position-adaptive scale coefficients which are adopted to resize the convolutional patches. Consequently, the receptive fields can be adjusted automatically according to the various sizes of the objects in scene images. Thus, the problems of invisible small objects and inconsistent large-object predictions can be alleviated. Furthermore, our proposed scale-adaptive convolutions are not only differentiable to learn the convolutional parameters and scale coefficients in an end-to-end way, but also of high parallelizability for the convenience of GPU implementation. Additionally, since the new scale regression layers are learned implicitly, any extra training supervision of object sizes is unnecessary. Extensive experiments on Cityscapes and ADE20K datasets well demonstrate the effectiveness of the proposed scaleadaptive convolutions.",
"title": ""
}
] |
[
{
"docid": "d38f9ef3248bf54b7a073beaa186ad42",
"text": "Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3:8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.",
"title": ""
},
{
"docid": "aac3becdb57fd0488fb3046af4ac95da",
"text": "We introduced some of the basic principles, techniques, and key design issues common to ISPs used in many diverse scientific, military, and commercial applications, and we touched on some of the less intuitive effects that must be dealt with. Many of these effects can influence the initial configuration of the system as well as design details later in the design process. The successful design of an ISP usually requires a multidisciplinary design team. The design of an ISP must often be closely coordinated with that of other major subsystems such as the primary sensor and the optics. The role of the systems engineer in the design process is perhaps the most critical because other members of the team may not be aware of the consequences of many of the effects discussed above. Inertially stabilized platforms (ISPs) are used to stabilize and point a broad array of sensors, cameras, telescopes, and weapon systems.",
"title": ""
},
{
"docid": "c4e7c757ad5a67b550d09f530b5204ef",
"text": "This paper describes our effort for a planning-based computational model of narrative generation that is designed to elicit surprise in the reader's mind, making use of two temporal narrative devices: flashback and foreshadowing. In our computational model, flashback provides a backstory to explain what causes a surprising outcome, while foreshadowing gives hints about the surprise before it occurs. Here, we present Prevoyant, a planning-based computational model of surprise arousal in narrative generation, and analyze the effectiveness of Prevoyant. The work here also presents a methodology to evaluate surprise in narrative generation using a planning-based approach based on the cognitive model of surprise causes. The results of the experiments that we conducted show strong support that Prevoyant effectively generates a discourse structure for surprise arousal in narrative.",
"title": ""
},
{
"docid": "18e019622188ab6ddb2beca69d51e1c9",
"text": "The rhesus macaque (Macaca mulatta) is the most utilized primate model in the biomedical and psychological sciences. Expressive behavior is of interest to scientists studying these animals, both as a direct variable (modeling neuropsychiatric disease, where expressivity is a primary deficit), as an indirect measure of health and welfare, and also in order to understand the evolution of communication. Here, intramuscular electrical stimulation of facial muscles was conducted in the rhesus macaque in order to document the relative contribution of each muscle to the range of facial movements and to compare the expressive function of homologous muscles in humans, chimpanzees and macaques. Despite published accounts that monkeys possess less differentiated and less complex facial musculature, the majority of muscles previously identified in humans and chimpanzees were stimulated successfully in the rhesus macaque and caused similar appearance changes. These observations suggest that the facial muscular apparatus of the monkey has extensive homology to the human face. The muscles of the human face, therefore, do not represent a significant evolutionary departure from those of a monkey species. Thus, facial expressions can be compared between humans and rhesus macaques at the level of the facial musculature, facilitating the systematic investigation of comparative facial communication.",
"title": ""
},
{
"docid": "4b1f3a34a3f2acdfebcc311c507a97f7",
"text": "Planning the path of an autonomous, agile vehicle ina dynamic environment is a very complex problem, especially when the vehicle is required to use its full maneuvering capabilities. Recent efforts aimed at using randomized algorithms for planning the path of kinematic and dynamic vehicles have demonstrated considerable potential for implementation on future autonomous platforms. This paper builds upon these efforts by proposing a randomized path planning architecture for dynamical systems in the presence of xed and moving obstacles. This architecture addresses the dynamic constraints on the vehicle’s motion, and it provides at the same time a consistent decoupling between low-level control and motion planning. The path planning algorithm retains the convergence properties of its kinematic counterparts. System safety is also addressed in the face of nite computation times by analyzing the behavior of the algorithm when the available onboard computation resources are limited, and the planning must be performed in real time. The proposed algorithm can be applied to vehicles whose dynamics are described either by ordinary differential equations or by higher-level, hybrid representations. Simulation examples involving a ground robot and a small autonomous helicopter are presented and discussed.",
"title": ""
},
{
"docid": "8522a5a6727a941611dbbebbe4bb7c11",
"text": "Microblogging encompasses both user-generated content and behavior. When modeling microblogging data, one has to consider personal and background topics, as well as how these topics generate the observed content and behavior. In this article, we propose the Generalized Behavior-Topic (GBT) model for simultaneously modeling background topics and users’ topical interest in microblogging data. GBT considers multiple topical communities (or realms) with different background topical interests while learning the personal topics of each user and the user’s dependence on realms to generate both content and behavior. This differentiates GBT from other previous works that consider either one realm only or content data only. By associating user behavior with the latent background and personal topics, GBT helps to model user behavior by the two types of topics. GBT also distinguishes itself from other earlier works by modeling multiple types of behavior together. Our experiments on two Twitter datasets show that GBT can effectively mine the representative topics for each realm. We also demonstrate that GBT significantly outperforms other state-of-the-art models in modeling content topics and user profiling.",
"title": ""
},
{
"docid": "f7d05b0efbf8fbd46e0294585b6db97c",
"text": "We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The predicted fog density using FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but also is well suited for image defogging. A new FADE-based referenceless perceptual image defogging, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE) achieves better results for darker, denser foggy images as well as on standard foggy images than the state of the art defogging methods. A software release of FADE and DEFADE is available online for public use: <;uri xlink:href=\"http://live.ece.utexas.edu/research/fog/index.html\" xlink:type=\"simple\">http://live.ece.utexas.edu/research/fog/index.html<;/uri>.",
"title": ""
},
{
"docid": "c6e1c8aa6633ec4f05240de1a3793912",
"text": "Medial prefrontal cortex (MPFC) is among those brain regions having the highest baseline metabolic activity at rest and one that exhibits decreases from this baseline across a wide variety of goal-directed behaviors in functional imaging studies. This high metabolic rate and this behavior suggest the existence of an organized mode of default brain function, elements of which may be either attenuated or enhanced. Extant data suggest that these MPFC regions may contribute to the neural instantiation of aspects of the multifaceted \"self.\" We explore this important concept by targeting and manipulating elements of MPFC default state activity. In this functional magnetic resonance imaging (fMRI) study, subjects made two judgments, one self-referential, the other not, in response to affectively normed pictures: pleasant vs. unpleasant (an internally cued condition, ICC) and indoors vs. outdoors (an externally cued condition, ECC). The ICC was preferentially associated with activity increases along the dorsal MPFC. These increases were accompanied by decreases in both active task conditions in ventral MPFC. These results support the view that dorsal and ventral MPFC are differentially influenced by attentiondemanding tasks and explicitly self-referential tasks. The presence of self-referential mental activity appears to be associated with increases from the baseline in dorsal MPFC. Reductions in ventral MPFC occurred consistent with the fact that attention-demanding tasks attenuate emotional processing. We posit that both self-referential mental activity and emotional processing represent elements of the default state as represented by activity in MPFC. We suggest that a useful way to explore the neurobiology of the self is to explore the nature of default state activity.",
"title": ""
},
{
"docid": "85736b2fd608e3d109ce0f3c46dda9ac",
"text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.",
"title": ""
},
{
"docid": "29822df06340218a43fbcf046cbeb264",
"text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.",
"title": ""
},
{
"docid": "e7d5f456603317b3ccd22c95b8089f8b",
"text": "This paper mainly focusses on the impact of distributed generation and best feeder reconfiguration of distribution system, in order to improve the quality of power in the distribution system. Primarily the goal of this paper is to mitigate as much as possible the losses in power system and improve the voltage profile. The optimization of the system constrained by feeder capability limit, radial configuration format, no load point interruption and load-point voltage limits. Using Hybrid Genetic Algorithm Particle Swarm Optimization (HGAPSO) to find the best configuration. This hybrid optimization or search algorithm has more efficiency and accuracy. Proposed methodology comprises of demonstration of 33-bus radial distribution with and without distributed generation. Study finding consist of the best possible configuration of switches given by optimization algorithm in order to minimize the losses but at the same time respecting all the constrain mention above.",
"title": ""
},
{
"docid": "fd5208bdf27f9531c425ff68bb8a9fad",
"text": "With the development of the Internet of Things technology, smart transportation has been continuously studied in recent years. In order to help people identify the real-time location and expected arrival time of buses, this paper proposes a real-time bus positioning system based on Long Range (LoRa) technology. The system can ease the anxiety of people waiting for buses and make the travel smarter and more convenient. The system is composed of terminal devices, data concentrators, cloud servers, and user interface. The terminal device installed on the bus broadcasts its position data to the data concentrators. Then the data concentrators upload data to the cloud server and present it to users. Compared with the traditional real-time bus positioning system, our system operates in the unlicensed frequency band and has a long transmission distance, which does not require any communication costs and repeaters. Experimental results show that our system has the advantages of low power consumption, low packet loss rate and short time delay.",
"title": ""
},
{
"docid": "f5aab3f627af376bcf4850ce654b70c8",
"text": "Languages are inherently ambiguous. Four out of five words in English have more than one meaning. Nowadays there is a growing number of small proprietary thesauri used for knowledge management for different applications. In order to enable the usage of these thesauri for automatic text annotations, we introduce a robust method for discriminating word senses using hypernyms. The method uses collocations to induce word senses and to discriminate the thesaural sense from the other senses by utilizing hypernym entries taken from a thesaurus. The main novelty of this work is the usage of hypernyms already at the stage sense induction. The hypernyms enable us to cast the task to a binary scenario, namely teasing apart thesaural senses from all the rest. The introduced method outperforms the baseline and has indicates accuracy above 80%.",
"title": ""
},
{
"docid": "58e16ce868473276550f17f19ab9938b",
"text": "By fully exploiting the optical channel properties, we propose in this paper the coherent optical zero padding orthogonal frequency division multiplexing (CO-ZP-OFDM) for future high-speed optical transport networks to increase the spectral efficiency and improve the system reliability. Unlike the periodically inserted training symbols in conventional optical OFDM systems, we design the polarization-time-frequency (PTF) coded pilots scattered within the time-frequency grid of the ZP-OFDM payload symbols to realize low-complexity multiple-input multiple-output (MIMO) channel estimation with high accuracy. Compared with conventional optical OFDM systems, CO-ZP-OFDM improves the spectral efficiency by about 6.62%. Simulation results indicate that the low-density parity-check (LDPC) coded bit error rate of the proposed scheme only suffers from no more than 0.3 dB optical signal-to-noise ratio (OSNR) loss compared with the ideal back-to-back case even when the optical channel impairments like chromatic dispersion (CD) and polarization mode dispersion (PMD) are severe.",
"title": ""
},
{
"docid": "973334e5704c861bc917abf5c0f4d0a1",
"text": "Today e-commerce has become crucial element to transform some of the world countries into an information society. Business to consumer (B2C) in the developing countries is not yet a normalcy as compared to the developed countries. Consumer behaviour research has shown disappointing results regarding the overall use of the Web for online shopping, despite its considerable promise as a channel for commerce. As the use of the Internet continues to grow in all aspects of daily life, there is an increasing need to better understand what trends of internet usage and to study the barriers and problem of ecommerce adoption. Hence, the purpose of this research is to define how far Technology Acceptance Model (TAM) contributed in e-commerce adoption. Data for this study was collected by the means of a survey conducted in Malaysia in 2010. A total of 611 questionnaire forms were delivered to respondents. The location of respondents was within Penang state. By studying this sample, conclusions would be drawn to generalize the interests of the population.",
"title": ""
},
{
"docid": "0297af005c837e410272ab3152942f90",
"text": "Iris authentication is a popular method where persons are accurately authenticated. During authentication phase the features are extracted which are unique. Iris authentication uses IR images for authentication. This proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper is focused on performance evaluation of color models used for color iris authentication. This proposed method is more reliable which cope up with different noises of color iris images. The experiments reveals the best selection of color model used for iris authentication. The proposed method is validated on UBIRIS noisy iris database. The results demonstrate that the accuracy is 92.1%, equal error rate of 0.072 and computational time is 0.039 seconds.",
"title": ""
},
{
"docid": "ac044ce167d7296675ddfa1f9387c75d",
"text": "Over the years, many millimeter-wave circulator techniques have been presented, such as nonradiative dielectric and fin-line circulators. Although excellent results have been demonstrated in the literature, their proliferation in commercial devices has been hindered by complex assembly cost. This paper presents a study of substrate-integrated millimeter-wave degree-2 circulators. Although the substrate integrated-circuits technique may be applied to virtually any planar transmission medium, the one adopted in this paper is the substrate integrated waveguide (SIW). Two design configurations are possible: a planar one that is suitable for thin substrate materials and a turnstile one for thicker substrate materials. The turnstile circulator is ideal for systems where the conductor losses associated with the thin SIW cannot be tolerated. The design methodology adopted in this paper is to characterize the complex gyrator circuit as a preamble to design. This is done via a commercial finite-element package",
"title": ""
},
{
"docid": "cff5ceab3d0b181e5278688371652495",
"text": "The redesign of business processes has a huge potential in terms of reducing costs and throughput times, as well as improving customer satisfaction. Despite rapid developments in the business process management discipline during the last decade, a comprehensive overview of the options to methodologically support a team to move from as-is process insights to to-be process alternatives is lacking. As such, no safeguard exists that a systematic exploration of the full range of redesign possibilities takes place by practitioners. Consequently, many attractive redesign possibilities remain unidentified and the improvement potential of redesign initiatives is not fulfilled. This systematic literature review establishes a comprehensive methodological framework, which serves as a catalog for process improvement use cases. The framework contains an overview of all the method options regarding the generation of process improvement ideas. This is established by identifying six key methodological decision areas, e.g. the human actors who can be invited to generate these ideas or the information that can be collected prior to this act. This framework enables practitioners to compose a well-considered method to generate process improvement ideas themselves. Based on a critical evaluation of the framework, the authors also offer recommendations that support academic researchers in grounding and improving methods for generating process Accepted after two revisions by the editors of the special issue. Electronic supplementary material The online version of this article (doi:10.1007/s12599-015-0417-x) contains supplementary material, which is available to authorized users. ir. R. J. B. Vanwersch (&) Dr. ir. I. Vanderfeesten Prof. Dr. ir. P. Grefen School of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, De Lismortel, room K.3, P.O. Box 513, 5600 MB Eindhoven, The Netherlands e-mail: r.j.b.vanwersch@tue.nl Dr. K. Shahzad College of Information Technology, University of the Punjab, Lahore, Pakistan Dr. K. Vanhaecht Department of Public Health and Primary Care, KU Leuven, University of Leuven, Leuven, Belgium Dr. K. Vanhaecht Department of Quality Management, University Hospitals KU Leuven, Leuven, Belgium Prof. Dr. ir. L. Pintelon Centre for Industrial Management/Traffic and Infrastructure, KU Leuven, University of Leuven, Leuven, Belgium Prof. Dr. J. Mendling Institute for Information Business, Vienna University of Economics and Business, Vienna, Austria Prof. Dr. G. G. van Merode Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands Prof. Dr. ir. H. A. Reijers Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands Prof. Dr. ir. H. A. Reijers Department of Computer Science, VU University Amsterdam, Amsterdam, The Netherlands 123 Bus Inf Syst Eng 58(1):43–53 (2016) DOI 10.1007/s12599-015-0417-x",
"title": ""
},
{
"docid": "bb74cbb76c6efb4a030d2c5653e18842",
"text": "Two new wideband in-phase and out-of-phase balanced power dividing/combining networks are proposed in this paper. Based on matrix transformation, the differential-mode and common-mode equivalent circuits of the two wideband in-phase and out-of-phase networks can be easily deduced. A patterned ground-plane technique is used to realize the strong coupling of the shorted coupled lines for the differential mode. Two planar wideband in-phase and out-of-phase balanced networks with bandwidths of 55.3% and 64.4% for the differential mode with wideband common-mode suppression are designed and fabricated. The theoretical and measured results agree well with each other and show good in-band performances.",
"title": ""
}
] |
scidocsrr
|
061464b211e7e3ad32eece72a61b4ab4
|
Guided Music Synthesis with Variable Markov Oracle
|
[
{
"docid": "6a3bef9e3ca87f13356050f85afbb0ed",
"text": "We introduce the concept of control improvisation, the process of generating a random sequence of control events guided by a reference sequence and satisfying a given specification. We propose a formal definition of the control improvisation problem and an empirical solution applied to the domain of music. More specifically, we consider the scenario of generating a monophonic Jazz melody (solo) on a given song harmonization. The music is encoded symbolically, with the improviser generating a sequence of note symbols comprising pairs of pitches (frequencies) and discrete durations. Our approach can be decomposed roughly into two phases: a generalization phase, that learns from a training sequence (e.g., obtained from a human improviser) an automaton generating similar sequences, and a supervision phase that enforces a specification on the generated sequence, imposing constraints on the music in both the pitch and rhythmic domains. The supervision uses a measure adapted from Normalized Compression Distances (NCD) to estimate the divergence between generated melodies and the training melody and employs strategies to bound this divergence. An empirical evaluation is presented on a sample set of Jazz music.",
"title": ""
},
{
"docid": "28a6a973c60cce82020457fac75622e4",
"text": "In this paper we present a new method for indexing of audio data in terms of repeating sub-clips of variable length that we call audio factors. The new structure allows fast retrieval and recombination of sub-clips in a manner that assures continuity between splice points. The resulting structure accomplishes effectively a new method for texture synthesis, where the amount of innovation is controlled by one of the synthesis parameters. In the paper we present the new structure and describe the algorithms for efficiently computing the different indexing links. Examples of texture synthesis are provided in the paper.",
"title": ""
},
{
"docid": "b7ddc52ae897720f50d3f092d8cfbdab",
"text": "Markov chains are a well known tool to model temporal properties of many phenomena, from text structure to fluctuations in economics. Because they are easy to generate, Markovian sequences, i.e. temporal sequences having the Markov property, are also used for content generation applications such as text or music generation that imitate a given style. However, Markov sequences are traditionally generated using greedy, left-to-right algorithms. While this approach is computationally cheap, it is fundamentally unsuited for interactive control. This paper addresses the issue of generating steerable Markovian sequences. We target interactive applications such as games, in which users want to control, through simple input devices, the way the system generates a Markovian sequence, such as a text, a musical sequence or a drawing. To this aim, we propose to revisit Markov sequence generation as a branch and bound constraint satisfaction problem (CSP). We propose a CSP formulation of the basic Markovian hypothesis as elementary Markov Constraints (EMC). We propose algorithms that achieve domain-consistency for the propagators of EMCs, in an event-based implementation of CSP. We show how EMCs can be combined to estimate the global Markovian probability of a whole sequence, and accommodate for different species of Markov generation such as fixed order, variable-order, or smoothing. Such a formulation, although more costly than traditional greedy generation algorithms, yields the immense advantage of being naturally steerable, since control specifications can be represented by arbitrary additional constraints, without any modification of the generation algorithm. We illustrate our approach on simple yet combinatorial chord sequence and melody generation problems and give some performance results.",
"title": ""
}
] |
[
{
"docid": "fc2f406143699978f778919b942c8778",
"text": "Collaborative filtering (CF) aims to produce recommendations based on other users' ratings to an item. Most existing CF methods rely on the overall ratings an item has received. However, these ratings alone sometimes cannot provide sufficient information to understand users' behaviors. For example, a user giving a high rating may indicate that he loves the item as a whole; however, it is still likely that he dislikes some particular aspects at the same time. In addition, users tend to place different emphases on different aspects when reaching an overall rating. This emphasis on aspects may even vary from users to items, and has a significant impact on a user's final decision. To make a better understanding of a user' behavior and generate a more accurate recommendation, we propose a framework that incorporates both user opinions and preferences on different aspects. This framework is composed of three components, namely, an opinion mining component, an aspect weighting computing component, and a rating inference component. The first component exploits opinion mining techniques to extract and summarize opinions on multiple aspects from reviews, and generates ratings on various aspects. The second component applies a tensor factorization strategy to automatically infer weights of different aspects in reaching an overall rating. The last one infers the overall rating of an item based on both aspect ratings and weights. Experiments on two real datasets prove that our model performs better compared with the baseline methods. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7c31640e56ee1a6473558d3898fc3b52",
"text": "Climate change impacts on malaria are typically assessed with scenarios for the long-term future. Here we focus instead on the recent past (1970-2003) to address whether warmer temperatures have already increased the incidence of malaria in a highland region of East Africa. Our analyses rely on a new coupled mosquito-human model of malaria, which we use to compare projected disease levels with and without the observed temperature trend. Predicted malaria cases exhibit a highly nonlinear response to warming, with a significant increase from the 1970s to the 1990s, although typical epidemic sizes are below those observed. These findings suggest that climate change has already played an important role in the exacerbation of malaria in this region. As the observed changes in malaria are even larger than those predicted by our model, other factors previously suggested to explain all of the increase in malaria may be enhancing the impact of climate change.",
"title": ""
},
{
"docid": "b9147ef0cf66bdb7ecc007a4e3092790",
"text": "This paper is related to the use of social media for disaster management by humanitarian organizations. The past decade has seen a significant increase in the use of social media to manage humanitarian disasters. It seems, however, that it has still not been used to its full potential. In this paper, we examine the use of social media in disaster management through the lens of Attribution Theory. Attribution Theory posits that people look for the causes of events, especially unexpected and negative events. The two major characteristics of disasters are that they are unexpected and have negative outcomes/impacts. Thus, Attribution Theory may be a good fit for explaining social media adoption patterns by emergency managers. We propose a model, based on Attribution Theory, which is designed to understand the use of social media during the mitigation and preparedness phases of disaster management. We also discuss the theoretical contributions and some practical implications. This study is still in its nascent stage and is research in progress.",
"title": ""
},
{
"docid": "32f96ae1a99ed2ade25df0792d8d3779",
"text": "The success of software development depends on the proper estimation of the effort required to develop the software. Project managers require a reliable approach for software effort estimation. It is especially important during the early stages of the software development life cycle. Accurate software effort estimation is a major concern in software industries. Stochastic Gradient Boosting (SGB) is one of the machine learning techniques that helps in getting improved estimated values. SGB is used for improving the accuracy of models built on decision trees. In this paper, the main goal is to estimate the effort required to develop various software projects using the class point approach. Then, optimization of the effort parameters is achieved using the SGB technique to obtain better prediction accuracy. Further- more, performance comparisons of the models obtained using the SGB technique with the Multi Layer Perceptron and the Radial Basis Function Network are presented in order to highlight the performance achieved by each method.",
"title": ""
},
{
"docid": "5e6c24f5f3a2a3c3b0aff67e747757cb",
"text": "Traps have been used extensively to provide early warning of hidden pest infestations. To date, however, there is only one type of trap on the market in the U.K. for storage mites, namely the BT mite trap, or monitor. Laboratory studies have shown that under the test conditions (20 °C, 65% RH) the BT trap is effective at detecting mites for at least 10 days for all three species tested: Lepidoglyphus destructor, Tyrophagus longior and Acarus siro. Further tests showed that all three species reached a trap at a distance of approximately 80 cm in a 24 h period. In experiments using 100 mites of each species, and regardless of either temperature (15 or 20 °C) or relative humidity (65 or 80% RH), the most abundant species in the traps was T. longior, followed by A. siro then L. destructor. Trap catches were highest at 20 °C and 65% RH. Temperature had a greater effect on mite numbers than humidity. Tests using different densities of each mite species showed that the number of L. destructor found in/on the trap was significantly reduced when either of the other two species was dominant. It would appear that there is an interaction between L. destructor and the other two mite species which affects relative numbers found within the trap.",
"title": ""
},
{
"docid": "2c63dcef21762f875082d5471d5bf53c",
"text": "Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed.",
"title": ""
},
{
"docid": "00f88387c8539fcbed2f6ec4f953438d",
"text": "We present Masstree, a fast key-value database designed for SMP machines. Masstree keeps all data in memory. Its main data structure is a trie-like concatenation of B+-trees, each of which handles a fixed-length slice of a variable-length key. This structure effectively handles arbitrary-length possiblybinary keys, including keys with long shared prefixes. +-tree fanout was chosen to minimize total DRAM delay when descending the tree and prefetching each tree node. Lookups use optimistic concurrency control, a read-copy-update-like technique, and do not write shared data structures; updates lock only affected nodes. Logging and checkpointing provide consistency and durability. Though some of these ideas appear elsewhere, Masstree is the first to combine them. We discuss design variants and their consequences.\n On a 16-core machine, with logging enabled and queries arriving over a network, Masstree executes more than six million simple queries per second. This performance is comparable to that of memcached, a non-persistent hash table server, and higher (often much higher) than that of VoltDB, MongoDB, and Redis.",
"title": ""
},
{
"docid": "2d5368515f2ea6926e9347d971745eb9",
"text": "Let us consider a \" random graph \" r,:l,~v having n possible (labelled) vertices and N edges; in other words, let us choose at random (with equal probabilities) one of the t 1 has no isolated points) and is connected in the ordinary sense. In the present paper we consider asymptotic statistical properties of random graphs for 11++ 30. We shall deal with the following questions: 1. What is the probability of r,,. T being completely connected? 2. What is the probability that the greatest connected component (sub-graph) of r,,, s should have effectively n-k points? (k=O, 1,. . .). 3. What is the probability that rp,N should consist of exactly kf I connected components? (k = 0, 1,. + .). 4. If the edges of a graph with n vertices are chosen successively so that after each step every edge which has not yet been chosen has the same probability to be chosen as the next, and if we continue this process until the graph becomes completely connected, what is the probability that the number of necessary sfeps v will be equal to a given number I? As (partial) answers to the above questions we prove ihe following four theorems. In Theorems 1, 2, and 3 we use the notation N,= (I-&n log n+cn 1 where c is an arbitrary fixed real number ([xl denotes the integer part of x).",
"title": ""
},
{
"docid": "5e7ce5a1c80c0c5316cec0b847d4a78d",
"text": "In the vast majority of publications, it is noticeably claimed that parallel robots or manipulators are supposed to perform better than their serial counterparts. However, in practice, such mechanisms suffer from many problems, as theoretically provided potentials are difficult to exploit. This paper focuses on the issue of dynamics and control and provides a methodology to achieve accurate control for parallel manipulators in the range of high dynamics. The general case of a 6-DOF mechanism is chosen as the case study to substantiate the approach by experimental results. An important contribution is the emphasis on the structural properties of 6-DOF parallel robots to derive an appropriate and integrated control strategy that leads to the improvement of tracking performance by using only the available measurements of actuator positions. First, accurate and computationally efficient modeling of the dynamics is discussed. It is followed by presenting appropriate and optimal design of experimental parameter identification. The development of the control scheme begins with robust design of controller-observer for the single actuators. It is enhanced by a centralized feedforward dynamics compensation. Since systematic tracking errors always remain, a model-based iterative learning controller is designed to further increase the accuracy at high dynamics.",
"title": ""
},
{
"docid": "38023edc4f0d27087a3b4813db643dd0",
"text": "Flapping flight is energetically more costly than running, although it is less costly to fly a given body mass a given distance per unit time than it is for a similar mass to run the same distance per unit time. This is mainly because birds can fly faster than they can run. Oxygen transfer and transport are enhanced in migrating birds compared with those in non-migrators: at the gas-exchange regions of the lungs the effective area is greater and the diffusion distance smaller. Also, migrating birds have larger hearts and haemoglobin concentrations in the blood, and capillary density in the flight muscles tends to be higher. Species like bar-headed geese migrate at high altitudes, where the availability of oxygen is reduced and the energy cost of flapping flight increased compared with those at sea level. Physiological adaptations to these conditions include haemoglobin with a higher affinity for oxygen than that in lowland birds, a greater effective ventilation of the gas-exchange surface of the lungs and a greater capillary-to-muscle fibre ratio. Migrating birds use fatty acids as their source of energy, so they have to be transported at a sufficient rate to meet the high demand. Since fatty acids are insoluble in water, birds maintain high concentrations of fatty acid-binding proteins to transport fatty acids across the cell membrane and within the cytoplasm. The concentrations of these proteins, together with that of a key enzyme in the β-oxidation of fatty acids, increase before migration.This article is part of the themed issue 'Moving in a moving medium: new perspectives on flight'.",
"title": ""
},
{
"docid": "e34a61754ff8cfac053af5cbedadd9e0",
"text": "An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.",
"title": ""
},
{
"docid": "885331c23b178d443ea46e814d31261a",
"text": "The huge increases in medical devices and clinical applications which generate enormous data have raised a big issue in managing, processing, and mining this massive amount of data. Indeed, traditional data warehousing frameworks can not be effective when managing the volume, variety, and velocity of current medical applications. As a result, several data warehouses face many issues over medical data and many challenges need to be addressed. New solutions have emerged and Hadoop is one of the best examples, it can be used to process these streams of medical data. However, without an efficient system design and architecture, these performances will not be significant and valuable for medical managers. In this paper, we provide a short review of the literature about research issues of traditional data warehouses and we present some important Hadoop-based data warehouses. In addition, a Hadoop-based architecture and a conceptual data model for designing medical Big Data warehouse are given. In our case study, we provide implementation detail of big data warehouse based on the proposed architecture and data model in the Apache Hadoop platform to ensure an optimal allocation of health resources.",
"title": ""
},
{
"docid": "a411f451c3afc63aa107b7cb8508275e",
"text": "Classical methods for detecting outliers deal with continuous variables. These methods are not readily applicable to categorical data, such as incorrect/correct scores (0/1) and ordered rating scale scores (e.g., 0, . . . , 4) typical of multi-item tests and questionnaires. This study proposes two definitions of outlier scores suited for categorical data. One definition combines information on outliers from scores on all the items in the test, and the other definition combines information from all pairs of item scores. For a particular item-score vector, an outlier score expresses the degree in which the item-score vector is unusual. For ten real-data sets, the distribution of each of the two outlier scores is inspected by means of Tukey’s fences and the extreme studentized deviate procedure. It is investigated whether the outliers that are identified are influential with respect to the statistical analysis performed on these data. Recommendations are given for outlier identification and accommodation in test and questionnaire data.",
"title": ""
},
{
"docid": "39fdfa5258c2cb22ed2d7f1f5b2afeaf",
"text": "Calling for research on automatic oversight for artificial intelligence systems.",
"title": ""
},
{
"docid": "39c19169ce3e38b5f6151a9881a80f8d",
"text": "In the context of learning from high-dimensional datasets, the performance of machine learning algorithms decreases and can even deteriorate in the presence of data noisiness. Feature selection is a dimension reduction process for encountering the curse of dimensionality, which can also solve the noise issue. This paper aims to describe the latest advances in feature selection. The contributions are investigated as attempts for overcoming challenges related to the feasibility, computational complexity, accuracy and reliability. This paper may allow the researcher to take a broad view of developments in feature selection.",
"title": ""
},
{
"docid": "096b0677b107609676d4d609032e1852",
"text": "Social media has deeply penetrated workplace, which has affected multiple aspects of employees' lives. This paper aims to investigate the influence of social media on employees' work performance and the underlying mechanism for how they create value at work. Based on media synchronicity and social capital theories, we propose that social media can promote work performance by stimulating trust among employees and offering a communication channel where explicit and implicit knowledge can be effectively transferred. The research model is tested using data collected from 105 Chinese software professionals. The results reveal that social media can enhance trust among employees. The direct effect on knowledge transfer was not significant but the impact was mediated by trust. Trust enhances knowledge exchange in the workplace with a stronger influence on the transfer of implicit knowledge rather than explicit knowledge. Implicit knowledge transfer was significantly related to work performance, while explicit knowledge transfer did not significantly influence work performance. The theoretical and practical contributions of this study are discussed.",
"title": ""
},
{
"docid": "36b6eb29650479d45b8b0479d6fc0371",
"text": "Cognizant of the research gap in the theorization of mobile learning, this paper conceptually explores how the theories and methodology of self-regulated learning (SRL), an active area in contemporary educational psychology, are inherently suited to address the issues originating from the defining characteristics of mobile learning: enabling student-centred, personal, and ubiquitous learning. These characteristics provide some of the conditions for learners to learn anywhere and anytime, and thus, entail learners to be motivated and to be able to self-regulate their own learning. We propose an analytic SRL model of mobile learning as a conceptual framework for understanding mobile learning, in which the notion of self-regulation as agency is at the core. The rationale behind this model is built on our recognition of the challenges in the current conceptualization of the mechanisms and processes of mobile learning, and the inherent relationship between mobile learning and SRL. We draw on work in a 3-year research project in developing and implementing a mobile learning environment in elementary science classes in Singapore to illustrate the application of SRL theories and methodology to understand and analyse mobile learning.",
"title": ""
},
{
"docid": "c74e3880a4bd7fe69f0c690fa4e4fdc4",
"text": "This paper presents a parallel real time framework for emotions and mental states extraction and recognition from video fragments of human movements. In the experimental setup human hands are tracked by evaluation of moving skin-colored objects. The tracking analysis demonstrates that acceleration and frequency characteristics of the traced objects are relevant for classification of the emotional expressiveness of human movements. The outcomes of the emotional and mental states recognition are cross-validatedwith the analysis of two independent certifiedmovement analysts (CMA’s) who use the Laban movement analysis (LMA) method. We argue that LMA based computer analysis can serve as a common language for expressing and interpreting emotional movements between robots and humans, and in that way it resembles the common coding principle between action and perception by humans and primates that is embodied by themirror neuron system. The solution is part of a larger project on interaction between a human and a humanoid robot with the aim of training social behavioral skills to autistic children with robots acting in a natural environment. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b3b96e6c1bbc2da8d548fb4b2d1072bc",
"text": "This paper reports on insider threat detection research, during which a prototype system (PRODIGAL) was developed and operated as a testbed for exploring a range of detection and analysis methods. The data and test environment, system components, and the core method of unsupervised detection of insider threat leads are presented to document this work and benefit others working in the insider threat domain. We also discuss a core set of experiments evaluating the prototype’s ability to detect both known and unknown malicious insider behaviors. The experimental results show the ability to detect a large variety of insider threat scenario instances imbedded in real data with no prior knowledge of what scenarios are present or when they occur. We report on an ensemble-based, unsupervised technique for detecting potential insider threat instances. When run over 16 months of real monitored computer usage activity augmented with independently developed and unknown but realistic, insider threat scenarios, this technique robustly achieves results within five percent of the best individual detectors identified after the fact. We discuss factors that contribute to the success of the ensemble method, such as the number and variety of unsupervised detectors and the use of prior knowledge encoded in detectors designed for specific activity patterns. Finally, the paper describes the architecture of the prototype system, the environment in which we conducted these experiments and that is in the process of being transitioned to operational users.",
"title": ""
},
{
"docid": "7ba8baf5a3e35fa181c83884c5e0a9d2",
"text": "Personal, mobile displays, such as those on mobile phones, are ubiquitous, yet for the most part, underutilized. We present results from a field experiment that investigated the effectiveness of these displays as a means for improving awareness of daily life (in our case, self-monitoring of physical activity). Twenty-eight participants in three experimental conditions used our UbiFit system for a period of three months in their day-to-day lives over the winter holiday season. Our results show, for example, that participants who had an awareness display were able to maintain their physical activity level (even during the holidays), while the level of physical activity for participants who did not have an awareness display dropped significantly. We discuss our results and their general implications for the use of everyday mobile devices as awareness displays.",
"title": ""
}
] |
scidocsrr
|
d51e4e4373fd0a31e668d52596497efc
|
DeepGRU: Deep Gesture Recognition Utility
|
[
{
"docid": "af572a43542fde321e18675213f635ae",
"text": "The representation of 3D pose plays a critical role for 3D action and gesture recognition. Rather than representing a 3D pose directly by its joint locations, in this paper, we propose a Deformable Pose Traversal Convolution Network that applies one-dimensional convolution to traverse the 3D pose for its representation. Instead of fixing the receptive field when performing traversal convolution, it optimizes the convolution kernel for each joint, by considering contextual joints with various weights. This deformable convolution better utilizes the contextual joints for action and gesture recognition and is more robust to noisy joints. Moreover, by feeding the learned pose feature to a LSTM, we perform end-to-end training that jointly optimizes 3D pose representation and temporal sequence recognition. Experiments on three benchmark datasets validate the competitive performance of our proposed method, as well as its efficiency and robustness to handle noisy joints of pose.",
"title": ""
}
] |
[
{
"docid": "92c735a70f5e6ee8ce7fd6f5c0a097c5",
"text": "In this paper, a near-optimum active rectifier is proposed to achieve well-optimized power conversion efficiency (PCE) and voltage conversion ratio (VCR) under various process, voltage, temperature (PVT) and loading conditions. The near-optimum operation includes: eliminated reverse current loss and maximized conduction time achieved by the proposed sampling-based real-time calibrations with automatic circuit-delay compensation for both on-and off-time of active diodes considering PVT variations; and power stage optimizations with adaptive sizing over a wide loading range. The design is fabricated in TSMC 65 nm process with standard I/O devices. Measurement results show more than 36% and 17% improvement in PCE and VCR, respectively, by the proposed techniques. A peak PCE of 94.8% with an 80 Ω loading, a peak VCR of 98.7% with 1 kΩ loading, and a maximum output power of 248.1 mW are achieved with 2.5 V input amplitude.",
"title": ""
},
{
"docid": "8f57603ee7ca4421e111f716e1205322",
"text": "By experiments on cells (neurons, hepatocytes, and fibroblasts) that are targets for thyroid hormones and a randomized clinical trial on iatrogenic hyperthyroidism, we validated the concept that L-carnitine is a peripheral antagonist of thyroid hormone action. In particular, L-carnitine inhibits both triiodothyronine (T3) and thyroxine (T4) entry into the cell nuclei. This is relevant because thyroid hormone action is mainly mediated by specific nuclear receptors. In the randomized trial, we showed that 2 and 4 grams per day of oral L-carnitine are capable of reversing hyperthyroid symptoms (and biochemical changes in the hyperthyroid direction) as well as preventing (or minimizing) the appearance of hyperthyroid symptoms (or biochemical changes in the hyperthyroid direction). It is noteworthy that some biochemical parameters (thyrotropin and urine hydroxyproline) were refractory to the L-carnitine inhibition of thyroid hormone action, while osteocalcin changed in the hyperthyroid direction, but with a beneficial end result on bone. A very recent clinical observation proved the usefulness of L-carnitine in the most serious form of hyperthyroidism: thyroid storm. Since hyperthyroidism impoverishes the tissue deposits of carnitine, there is a rationale for using L-carnitine at least in certain clinical settings.",
"title": ""
},
{
"docid": "11ed66cfb1a686ce46b1ad0ec6cf5d13",
"text": "OBJECTIVE\nTo evaluate a novel ultrasound measurement, the prefrontal space ratio (PFSR), in second-trimester trisomy 21 and euploid fetuses.\n\n\nMETHODS\nStored three-dimensional volumes of fetal profiles from 26 trisomy 21 fetuses and 90 euploid fetuses at 15-25 weeks' gestation were examined. A line was drawn between the leading edge of the mandible and the maxilla (MM line) and extended in front of the forehead. The ratio of the distance between the leading edge of the skull and that of the skin (d(1)) to the distance between the skin and the point where the MM line was intercepted (d(2)) was calculated (d(2)/d(1)). The distributions of PFSR in trisomy 21 and euploid fetuses were compared, and the relationship with gestational age in each group was evaluated by Spearman's rank correlation coefficient (r(s) ).\n\n\nRESULTS\nThe PFSR in trisomy 21 fetuses (mean, 0.36; range, 0-0.81) was significantly lower than in euploid fetuses (mean, 1.48; range, 0.85-2.95; P < 0.001 (Mann-Whitney U-test)). There was no significant association between PFSR and gestational age in either trisomy 21 (r(s) = 0.25; 95% CI, - 0.15 to 0.58) or euploid (r(s) = 0.06; 95% CI, - 0.15 to 0.27) fetuses.\n\n\nCONCLUSION\nThe PFSR appears to be a highly sensitive and specific marker of trisomy 21 in the second trimester of pregnancy.",
"title": ""
},
{
"docid": "c4044ab0e304c3bc5cf92995438cbe3d",
"text": "Several recent research efforts in the biometrics have focused on developing personal identification using very low-resolution imaging resulting from widely deployed surveillance cameras and mobile devices. Identification of human faces using such low-resolution imaging has shown promising results and has shown its utility for range of applications (surveillance). This paper investigates contactless identification of such low resolution (∼ 50 dpi) fingerprint images acquired using webcam. The acquired images are firstly subjected to robust preprocessing steps to extract region of interest and normalize uneven illumination. We extract localized feature information and effectively incorporate this local information into matching stage. The experimental results are presented on two session database of 156 subjects acquired over a period of 11 months and achieve average rank-one identification accuracy of 93.97%. The achieved results are highly promising to invite attention for range of applications, including surveillance, and sprung new directions for further research efforts.",
"title": ""
},
{
"docid": "6981598efd4a70f669b5abdca47b7ea1",
"text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain a satisfying performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.",
"title": ""
},
{
"docid": "4851b83b4ef6efa36777c28be8548c8d",
"text": "The finite element methodology has become a standard framework for approximating the solution to the Poisson-Boltzmann equation in many biological applications. In this article, we examine the numerical efficacy of least-squares finite element methods for the linearized form of the equations. In particular, we highlight the utility of a first-order form, noting optimality, control of the flux variables, and flexibility in the formulation, including the choice of elements. We explore the impact of weighting and the choice of elements on conditioning and adaptive refinement. In a series of numerical experiments, we compare the finite element methods when applied to the problem of computing the solvation free energy for realistic molecules of varying size.",
"title": ""
},
{
"docid": "74273502995ceaac87737d274379d7dc",
"text": "Majority of the systems designed to handle big RDF data rely on a single high-end computer dedicated to a certain RDF dataset and do not easily scale out, at the same time several clustered solution were tested and both the features and the benchmark results were unsatisfying. In this paper we describe a system designed to tackle such issues, a system that connects RDF4J and Apache HBase in order to receive an extremely scalable RDF store.",
"title": ""
},
{
"docid": "573b563cfc7eb96552a906fb9263ea6d",
"text": "Supply chain is complex today. Multi-echelon, highly disjointed, and geographically spread are some of the cornerstones of today’s supply chain. All these together with different governmental policies and human behavior make it almost impossible to probe incidents and trace events in case of supply chain disruptions. In effect, an end-to-end supply chain, from the most basic raw material to the final product in a customer’s possession, is opaque. The inherent cost involved in managing supply chain intermediaries, their reliability, traceability, and transparency further complicate the supply chain. The solution to such complicated problems lies in improving supply chain transparency. This is now possible with the concept of blockchain. The usage of blockchain in a financial transaction is well known. This paper reviews blockchain technology, which is changing the face of supply chain and bringing in transparency and authenticity. This paper first discusses the history and evolution of blockchain from the bitcoin network, and goes on to explore the protocols. The author takes a deep dive into the design of blockchain, exploring its five pillars and three-layered architecture, which enables most of the blockchains today. With the architecture, the author focuses on the applications, use cases, road map, and challenges for blockchain in the supply chain domain as well as the synergy of blockchain with enterprise applications. It analyzes the integration of the enterprise resource planning (ERP) system of the supply chain domain with blockchain. It also explores the three distinct growth areas: ERP-blockchain supply chain use cases, the middleware for connecting the blockchain with ERP, and blockchain as a service (BaaS). The paper ends with a brief conclusion and a discussion.",
"title": ""
},
{
"docid": "7b7a0b0b6a36789834c321d04c2e2f8f",
"text": "In the present paper we propose and evaluate a framework for detection and classification of plant leaf/stem diseases using image processing and neural network technique. The images of plant leaves affected by four types of diseases namely early blight, late blight, powdery-mildew and septoria has been considered for study and evaluation of feasibility of the proposed method. The color transformation structures were obtained by converting images from RGB to HSI color space. The Kmeans clustering algorithm was used to divide images into clusters for demarcation of infected area of the leaves. After clustering, the set of color and texture features viz. moment, mean, variance, contrast, correlation and entropy were extracted based on Color Co-occurrence Method (CCM). A feed forward back propagation neural network was configured and trained using extracted set of features and subsequently utilized for detection of leaf diseases. Keyword: Color Co-Occurrence Method, K-Means, Feed Forward Neural Network",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "2d718fdaecb286ef437b81d2a31383dd",
"text": "In this paper, we present a novel non-parametric polygonal approximation algorithm for digital planar curves. The proposed algorithm first selects a set of points (called cut-points) on the contour which are of very ‘high’ curvature. An optimization procedure is then applied to find adaptively the best fitting polygonal approximations for the different segments of the contour as defined by the cut-points. The optimization procedure uses one of the efficiency measures for polygonal approximation algorithms as the objective function. Our algorithm adaptively locates segments of the contour with different levels of details. The proposed algorithm follows the contour more closely where the level of details on the curve is high, while addressing noise by using suppression techniques. This makes the algorithm very robust for noisy, real-life contours having different levels of details. The proposed algorithm performs favorably when compared with other polygonal approximation algorithms using the popular shapes. In addition, the effectiveness of the algorithm is shown by measuring its performance over a large set of handwritten Arabic characters and MPEG7 CE Shape-1 Part B database. Experimental results demonstrate that the proposed algorithm is very stable and robust compared with other algorithms. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "40e3df04c4ca2bb2b11459c4dd2fcd10",
"text": "OBJECTIVES\nTo highlight the morphodynamic anatomical mechanisms that influence the results of rhinoplasty. To present the technical modalities of nasal dorsum preservation rhinoplasties. To determine the optimized respective surgical indications of the two main techniques of rhinoplasty: interruption rhinoplasty versus conservative rhinoplasty.\n\n\nMATERIALS AND METHODS\nBased on anatomical dissections and initial morphodynamic studies carried out on 100 anatomical specimens, a prospective study of a continuous series of 400 patients operated of primary reduction rhinoplasty or septo-rhinoplasty by one of authors (YS) has been undertaken over a period of ten years (1995-2005) in order to optimize the surgical management of the nasal hump. The studied parameters were: (1) surgical safety, (2) quality of early and late aesthetic result, (3) quality of the functional result, (4) ease of the technical realization of a possible secondary rhinoplasty. The other selected criteria were function of the different nasal hump morphotypes and the expressed wishes of the patients.\n\n\nRESULTS\nThe anatomical and morphodynamic studies made it possible to better understand the role of the \"M\" double-arch shape of the nose and the role of the cartilaginous buttresses not only as a function but also the anatomy and the aesthetics of the nose. It is necessary to preserve or repair the arche structures of the septo-triangular and alo-columellar sub-units. The conservative technique, whose results appear much more natural aesthetically, functionally satisfactory and durable over the long term, must be favoured in particular in man and in cases presenting a risk of collapse of the nasal valve.\n\n\nCONCLUSION\nThe rhinoplastician must be able to propose, according to the patient's wishes and in view of the results of the morphological analysis, the most adapted procedure according to his own surgical training but by supporting conservation of the osteo-cartilaginous vault whenever possible.",
"title": ""
},
{
"docid": "ab7e012ac498cf22896b0ff09d7e0d29",
"text": "This paper studies equilibrium asset pricing with liquidity risk — the risk arising from unpredictable changes in liquidity over time. It is shown that a security’s required return depends on its expected illiquidity and on the covariances of its own return and illiquidity with market return and market illiquidity. This gives rise to a liquidityadjusted capital asset pricing model. Further, if a security’s liquidity is persistent, a shock to its illiquidity results in low contemporaneous returns and high predicted future returns. Empirical evidence based on cross-sectional tests is consistent with liquidity risk being priced. We are grateful for conversations with Andrew Ang, Joseph Chen, Sergei Davydenko, Francisco Gomes, Joel Hasbrouck, Andrew Jackson, Tim Johnson, Martin Lettau, Anthony Lynch, Stefan Nagel, Dimitri Vayanos, Luis Viceira, Jeff Wurgler, and seminar participants at London Business School, New York University, the National Bureau of Economic Research (NBER) Summer Institute 2002, and the Five Star Conference 2002. We are especially indebted to Yakov Amihud for being generous with his time in guiding us through the empirical tests. All errors remain our own. Acharya is at London Business School and is a Research Affiliate of the Centre for Economic Policy Research (CEPR). Address: London Business School, Regent’s Park, London NW1 4SA, UK. Phone: +44 (0)20 7262 5050 x 3535. Fax: +44 (0)20 7724 3317. Email: vacharya@london.edu. Web: http://www.london.edu/faculty/vacharya Pedersen is at the Stern School of Business, New York University, 44 West Fourth Street, Suite 9-190, New York, NY 10012-1126. Phone: (212) 998-0359. Fax: (212) 995-4233. Email: lpederse@stern.nyu.edu. Web: http://www.stern.nyu.edu/∼lpederse/",
"title": ""
},
{
"docid": "1e139fa9673f83ac619a5da53391b1ef",
"text": "In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves the features inspired by the free energy principle and the structural degradation model. Furthermore, the free energy theory also reveals that the HVS always tries to infer the meaningful part from the visual stimuli. In terms of this finding, we first predict an image that the HVS perceives from a distorted image based on the free energy theory, then the second group of features is composed of some HVS-inspired features (such as structural information and gradient magnitude) computed using the distorted and predicted images. The third group of features quantifies the possible losses of “naturalness” in the distorted image by fitting the generalized Gaussian distribution to mean subtracted contrast normalized coefficients. After feature extraction, our algorithm utilizes the support vector machine based regression module to derive the overall quality score. Experiments on LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of our introduced NR IQA metric compared to the state-of-the-art.",
"title": ""
},
{
"docid": "6037693a098f8f2713b2316c75447a50",
"text": "Presently, monoclonal antibodies (mAbs) therapeutics have big global sales and are starting to receive competition from biosimilars. We previously reported that the nano-surface and molecular-orientation limited (nSMOL) proteolysis which is optimal method for bioanalysis of antibody drugs in plasma. The nSMOL is a Fab-selective limited proteolysis, which utilize the difference of protease nanoparticle diameter (200 nm) and antibody resin pore diameter (100 nm). In this report, we have demonstrated that the full validation for chimeric antibody Rituximab bioanalysis in human plasma using nSMOL proteolysis. The immunoglobulin fraction was collected using Protein A resin from plasma, which was then followed by the nSMOL proteolysis using the FG nanoparticle-immobilized trypsin under a nondenaturing condition at 50°C for 6 h. After removal of resin and nanoparticles, Rituximab signature peptides (GLEWIGAIYPGNGDTSYNQK, ASGYTFTSYNMHWVK, and FSGSGSGTSYSLTISR) including complementarity-determining region (CDR) and internal standard P14R were simultaneously quantified by multiple reaction monitoring (MRM). This quantification of Rituximab using nSMOL proteolysis showed lower limit of quantification (LLOQ) of 0.586 µg/mL and linearity of 0.586 to 300 µg/mL. The intra- and inter-assay precision of LLOQ, low quality control (LQC), middle quality control (MQC), and high quality control (HQC) was 5.45-12.9% and 11.8, 5.77-8.84% and 9.22, 2.58-6.39 and 6.48%, and 2.69-7.29 and 4.77%, respectively. These results indicate that nSMOL can be applied to clinical pharmacokinetics study of Rituximab, based on the precise analysis.",
"title": ""
},
{
"docid": "679eb46c45998897b4f8e641530f44a7",
"text": "Workers in hazardous environments such as mining are constantly exposed to the health and safety hazards of dynamic and unpredictable conditions. One approach to enable them to manage these hazards is to provide them with situational awareness: real-time data (environmental, physiological, and physical location data) obtained from wireless, wearable, smart sensor technologies deployed at the work area. The scope of this approach is limited to managing the hazards of the immediate work area for prevention purposes; it does not include technologies needed after a disaster. Three critical technologies emerge and converge to support this technical approach: smart-wearable sensors, wireless sensor networks, and low-power embedded computing. The major focus of this report is on smart sensors and wireless sensor networks. Wireless networks form the infrastructure to support the realization of situational awareness; therefore, there is a significant focus on wireless networks. Lastly, the “Future Research” section pulls together the three critical technologies by proposing applications that are relevant to mining. The applications are injured miner (person-down) detection; a wireless, wearable remote viewer; and an ultrawide band smart environment that enables localization and tracking of humans and resources. The smart environment could provide location data, physiological data, and communications (video, photos, graphical images, audio, and text messages). Electrical engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA. President, The Designer-III Co., Franklin, PA. General engineer, Pittsburgh Research Laboratory (now with the National Personal Protective Technology Laboratory), National Institute for Occupational Safety and Health, Pittsburgh, PA. Supervisory general engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA.",
"title": ""
},
{
"docid": "c9d3def588f5f3dc95955635ebaa0d3d",
"text": "In this paper we propose a novel computer vision method for classifying human facial expression from low resolution images. Our method uses the bag of words representation. It extracts dense SIFT descriptors either from the whole image or from a spatial pyramid that divides the image into increasingly fine sub-regions. Then, it represents images as normalized (spatial) presence vectors of visual words from a codebook obtained through clustering image descriptors. Linear kernels are built for several choices of spatial presence vectors, and combined into weighted sums for multiple kernel learning (MKL). For machine learning, the method makes use of multi-class one-versus-all SVM on the MKL kernel computed using this representation, but with an important twist, the learning is local, as opposed to global – in the sense that, for each face with an unknown label, a set of neighbors is selected to build a local classification model, which is eventually used to classify only that particular face. Empirical results indicate that the use of presence vectors, local learning and spatial information improve recognition performance together by more than 5%. Finally, the proposed model ranked fourth in the Facial Expression Recognition Challenge, with an accuracy of 67.484% on the final test set. ICML 2013 Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. Copyright 2013 by the author(s).",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "3dc1598f8653c540e6e61daf2994b8ed",
"text": "Labeled graphs provide a natural way of representing entities, relationships and structures within real datasets such as knowledge graphs and protein interactions. Applications such as question answering, semantic search, and motif discovery entail efficient approaches for subgraph matching involving both label and structural similarities. Given the NP-completeness of subgraph isomorphism and the presence of noise, approximate graph matching techniques are required to handle queries in a robust and real-time manner. This paper presents a novel technique to characterize the subgraph similarity based on statistical significance captured by chi-square statistic. The statistical significance model takes into account the background structure and label distribution in the neighborhood of vertices to obtain the best matching subgraph and, therefore, robustly handles partial label and structural mismatches. Based on the model, we propose two algorithms, VELSET and NAGA, that, given a query graph, return the top-k most similar subgraphs from a (large) database graph. While VELSET is more accurate and robust to noise, NAGA is faster and more applicable for scenarios with low label noise. Experiments on large real-life graph datasets depict significant improvements in terms of accuracy and running time in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "340a506b8968efa5f775c26fd5841599",
"text": "One of the teaching methods available to teachers in the ‘andragogic’ model of teaching is the method of ‘Socratic Seminars’. This is a teacher-directed form of instruction in which questions are used as the sole method of teaching, placing students in the position of having to recognise the limits of their knowledge, and hopefully, motivating them to learn. This paper aims at initiating the discussion on the strengths and drawbacks of this method. Based on empirical research, the paper suggests that the Socratic method seems to be a very effective method for teaching adult learners, but should be used with caution depending on the personality of the learners.",
"title": ""
}
] |
scidocsrr
|
0b8322afdab38bd3f9bd25f1070b8054
|
Minimal Intelligence Agents for Bargaining Behaviors in Market Based Environments
|
[
{
"docid": "9eba7766cfd92de0593937defda6ce64",
"text": "A basic classifier system, ZCS, is presented that keeps much of Holland's original framework but simplifies it to increase understandability and performance. ZCS's relation to Q-learning is brought out, and their performances compared in environments of two difficulty levels. Extensions to ZCS are proposed for temporary memory, better action selection, more efficient use of the genetic algorithm, and more general classifier representation.",
"title": ""
}
] |
[
{
"docid": "d5868da2fedb7498a9d6454ed939408c",
"text": "over concrete thinking Understand that virtual objects are computer generated, and they do not need to obey physical laws",
"title": ""
},
{
"docid": "2c2942905010e71cda5f8b0f41cf2dd0",
"text": "1 Focus and anaphoric destressing Consider a pronunciation of (1) with prominence on the capitalized noun phrases. In terms of a relational notion of prominence, the subject NP she] is prominent within the clause S she beats me], and NP Sue] is prominent within the clause S Sue beats me]. This prosody seems to have the pragmatic function of putting the two clauses into opposition, with prominences indicating where they diier, and prosodic reduction of the remaining parts indicating where the clauses are invariant. (1) She beats me more often than Sue beats me Car84], Roc86] and Roo92] propose theories of focus interpretation which formalize the idea just outlined. Under my assumptions, the prominences are the correlates of a syntactic focus features on the two prominent NPs, written as F subscripts. Further, the grammatical representation of (1) includes operators which interpret the focus features at the level of the minimal dominating S nodes. In the logical form below, each focus feature is interpreted by an operator written .",
"title": ""
},
{
"docid": "9042faed1193b7bc4c31f2bc239c5d89",
"text": "Hand gesture recognition for human computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system, which can identify specific human gestures and use them to convey information or for device control. This paper presents a comparative study of four classification algorithms for static hand gesture classification using two different hand features data sets. The approach used consists in identifying hand pixels in each frame, extract features and use those features to recognize a specific hand pose. The results obtained proved that the ANN had a very good performance and that the feature selection and data preparation is an important phase in the all process, when using low-resolution images like the ones obtained with the camera in the current work.",
"title": ""
},
{
"docid": "df82e5936300b495502cfff835a1b8e1",
"text": "We present WISER, a new semantic search engine for expert finding in academia. Our system is unsupervised and it jointly combines classical language modeling techniques, based on text evidences, with the Wikipedia Knowledge Graph, via entity linking. WISER indexes each academic author through a novel profiling technique which models her expertise with a small, labeled and weighted graph drawn from Wikipedia. Nodes in this graph are the Wikipedia entities mentioned in the author’s publications, whereas the weighted edges express the semantic relatedness among these entities computed via textual and graph-based relatedness functions. Every node is also labeled with a relevance score which models the pertinence of the corresponding entity to author’s expertise, and is computed by means of a proper random-walk calculation over that graph; and with a latent vector representation which is learned via entity and other kinds of structural embeddings derived from Wikipedia. At query time, experts are retrieved by combining classic document-centric approaches, which exploit the occurrences of query terms in the author’s documents, with a novel set of profile-centric scoring strategies, which compute the semantic relatedness between the author’s expertise and the query topic via the above graph-based profiles. The effectiveness of our system is established over a largescale experimental test on a standard dataset for this task. We show that WISER achieves better performance than all the other competitors, thus proving the effectiveness of modelling author’s profile via our “semantic” graph of entities. Finally, we comment on the use of WISER for indexing and profiling the whole research community within the University of Pisa, and its application to technology transfer in our University.",
"title": ""
},
{
"docid": "4627d8e86bec798979962847523cc7e0",
"text": "Consuming news over online media has witnessed rapid growth in recent years, especially with the increasing popularity of social media. However, the ease and speed with which users can access and share information online facilitated the dissemination of false or unverified information. One way of assessing the credibility of online news stories is by examining the attached images. These images could be fake, manipulated or not belonging to the context of the accompanying news story. Previous attempts to news verification provided the user with a set of related images for manual inspection. In this work, we present a semi-automatic approach to assist news-consumers in instantaneously assessing the credibility of information in hypertext news articles by means of meta-data and feature analysis of images in the articles. In the first phase, we use a hybrid approach including image and text clustering techniques for checking the authenticity of an image. In the second phase, we use a hierarchical feature analysis technique for checking the alteration in an image, where different sets of features, such as edges and SURF, are used. In contrast to recently reported manual news verification, our presented work shows a quantitative measurement on a custom dataset. Results revealed an accuracy of 72.7% for checking the authenticity of attached images with a dataset of 55 articles. Finding alterations in images resulted in an accuracy of 88% for a dataset of 50 images.",
"title": ""
},
{
"docid": "54e7bf2aa21a539f5a0ddcfd0bbc8be1",
"text": "We consider the problem of grasping concave objects, i.e., objects whose surface includes regions with negative curvature. When a multifingered hand is used to restrain these objects, these areas can be advantageously used to determine grasps capable of more robustly resisting to external disturbance wrenches. We propose a new grasp quality metric specifically suited for this case, and we use it to inform a grasp planner searching the space of possible grasps. Our findings are validated both in simulation and on a real robot system executing a bin picking task. Experimental validation shows that our method is more effective than those not explicitly considering negative curvature.",
"title": ""
},
{
"docid": "b8a681b6c928d8b84fa5f30154d5af85",
"text": "Medicine relies on the use of pharmacologically active agents (drugs) to manage and treat disease. However, drugs are not inherently effective; the benefit of a drug is directly related to the manner by which it is administered or delivered. Drug delivery can affect drug pharmacokinetics, absorption, distribution, metabolism, duration of therapeutic effect, excretion, and toxicity. As new therapeutics (e.g., biologics) are being developed, there is an accompanying need for improved chemistries and materials to deliver them to the target site in the body, at a therapeutic concentration, and for the required period of time. In this Perspective, we provide an historical overview of drug delivery and controlled release followed by highlights of four emerging areas in the field of drug delivery: systemic RNA delivery, drug delivery for localized therapy, oral drug delivery systems, and biologic drug delivery systems. In each case, we present the barriers to effective drug delivery as well as chemical and materials advances that are enabling the field to overcome these hurdles for clinical impact.",
"title": ""
},
{
"docid": "53e8333b3e4e9874449492852d948ea2",
"text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.",
"title": ""
},
{
"docid": "16832bc2740773facde956fc1b524d28",
"text": "Diffractive Optically Variable Image Devices (DOVIDs) are popular security features used to protect security documents such as banknotes, ID cards, passports, etc. Checking authenticity of these security features on both user as well as forensic level remains a challenging task, requiring sophisticated hardware tools and expert knowledge. Recently, we proposed a technique exploiting a large-scale photometric behavior of DOVIDs in order to discriminate denominations and detect counterfeits. Here we investigate invariance properties of the proposed method and demonstrate its robustness against various common perturbations, which may have negative impact on the acquisition quality in practice. Presented results show a great potential of this approach primarily for security and forensic purposes, but also for other applications, where automated inspection of DOVIDs is of interest.",
"title": ""
},
{
"docid": "f38b92464776748a919342aae74e460c",
"text": "Despite the notable progress in physically-based rendering, there is still a long way to go before we can automatically generate predictable images of biological materials. In this paper, we address an open problem in this area, namely the spectral simulation of light interaction with human skin. We propose a novel biophysicallybased model that accounts for all components of light propagation in skin tissues, namely surface reflectance, subsurface reflectance and transmittance, and the biological mechanisms of light absorption by pigments in these tissues. The model is controlled by biologically meaningful parameters, and its formulation, based on standard Monte Carlo techniques, enables its straightforward incorporation into realistic image synthesis frameworks. Besides its biophysically-based nature, the key difference between the proposed model and the existing skin models is its comprehensiveness, i.e., it computes both spectral (reflectance and transmittance) and scattering (bidirectional surface-scattering distribution function) quantities for skin specimens. In order to assess the predictability of our simulations, we evaluate their accuracy by comparing results from the model with actual skin measured data. We also present computer generated images to illustrate the flexibility of the proposed model with respect to variations in the biological input data, and its applicability not only in the predictive image synthesis of different skin tones, but also in the spectral simulation of medical conditions.",
"title": ""
},
{
"docid": "228cd0696e0da6f18a22aa72f009f520",
"text": "Modern Convolutional Neural Networks (CNN) are extremely powerful on a range of computer vision tasks. However, their performance may degrade when the data is characterised by large intra-class variability caused by spatial transformations. The Spatial Transformer Network (STN) is currently the method of choice for providing CNNs the ability to remove those transformations and improve performance in an end-to-end learning framework. In this paper, we propose Densely Fused Spatial Transformer Network (DeSTNet), which, to our best knowledge, is the first dense fusion pattern for combining multiple STNs. Specifically, we show how changing the connectivity pattern of multiple STNs from sequential to dense leads to more powerful alignment modules. Extensive experiments on three benchmarks namely, MNIST, GTSRB, and IDocDB show that the proposed technique outperforms related state-of-the-art methods (i.e., STNs and CSTNs) both in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "f3599d23a21ca906e615025ac3715131",
"text": "This literature review synthesized the existing research on cloud computing from a business perspective by investigating 60 sources and integrates their results in order to offer an overview about the existing body of knowledge. Using an established framework our results are structured according to the four dimensions following: cloud computing characteristics, adoption determinants, governance mechanisms, and business impact. This work reveals a shifting focus from technological aspects to a broader understanding of cloud computing as a new IT delivery model. There is a growing consensus about its characteristics and design principles. Unfortunately, research on factors driving or inhibiting the adoption of cloud services, as well as research investigating its business impact empirically, is still limited. This may be attributed to cloud computing being a rather recent research topic. Research on structures, processes and employee qualification to govern cloud services is at an early stage as well.",
"title": ""
},
{
"docid": "e834006d59eec7d8851c78a2b57998b1",
"text": "We present a framework for text simplification based on applying transformation rules to a typed dependency representation produced by the Stanford parser. We test two approaches to regeneration from typed dependencies: (a) gen-light, where the transformed dependency graphs are linearised using the word order and morphology of the original sentence, with any changes coded into the transformation rules, and (b)gen-heavy, where the Stanford dependencies are reduced to a DSyntS representation and sentences are generating formally using the RealPro surface realiser. The main contribution of this paper is to compare the robustness of these approaches in the presence of parsing errors, using both a single parse and an n-best parse setting in an overgenerate and rank approach. We find that the gen-light approach is robust to parser error, particularly in the n-best parse setting. On the other hand, parsing errors cause the realiser in the genheavy approach to order words and phrases in ways that are disliked by our evaluators.",
"title": ""
},
{
"docid": "8c9155ce72bc3ba11bd4680d46ad69b5",
"text": "Many theorists assume that the cognitive system is composed of a collection of encapsulated processing components or modules, each dedicated to performing a particular cognitive function. On this view, selective impairments of cognitive tasks following brain damage, as evidenced by double dissociations, are naturally interpreted in terms of the loss of particular processing components. By contrast, the current investigation examines in detail a double dissociation between concrete and abstract work reading after damage to a connectionist network that pronounces words via meaning and yet has no separable components (Plaut & Shallice, 1993). The functional specialization in the network that gives rise to the double dissociation is not transparently related to the network's structure, as modular theories assume. Furthermore, a consideration of the distribution of effects across quantitatively equivalent individual lesions in the network raises specific concerns about the interpretation of single-case studies. The findings underscore the necessity of relating neuropsychological data to cognitive theories in the context of specific computational assumptions about how the cognitive system operates normally and after damage.",
"title": ""
},
{
"docid": "f47ba00cf0ca7e5c88e20785c1fd3859",
"text": "Photovoltaic maximum power point tracker (MPPT) systems are commonly employed to maximize the photovoltaic output power, since it is strongly affected in accordance to the incident solar radiation, surface temperature and load-type changes. Basically, a MPPT system consists on a dc-dc converter (hardware) controlled by a tracking algorithm (software) and the combination of both, hardware and software, defines the tracking efficiency. This paper shows that even when the most accurate algorithm is employed, the maximum power point cannot be found, since its imposition as operation point depends on the dc-dc converter static feature and the load-type connected to the system output. For validating the concept, the main dc-dc converters, i.e., Boost, Buck-Boost, Cuk, SEPIC and Zeta are analyzed considering two load-types: resistive voltage regulated dc bus. Simulation and experimental results are included for validating the theoretical analysis.",
"title": ""
},
{
"docid": "7da8ca3c0c60de80e71c8c4e44f2e777",
"text": "We describe the first release of our corpus of 97 million Twitter posts. We believe that this data will prove valuable to researchers working in social media, natural language processing, large-scale data processing, and similar areas.",
"title": ""
},
{
"docid": "36c73f8dd9940b2071ad55ae1dd83c27",
"text": "Current music recommender systems rely on techniques like collaborative filtering on user-provided information in order to generate relevant recommendations based upon users’ music collections or listening habits. In this paper, we examine whether better recommendations can be obtained by taking into account the music preferences of the user’s social contacts. We assume that music is naturally diffused through the social network of its listeners, and that we can propagate automatic recommendations in the same way through the network. In order to test this statement, we developed a music recommender application called Starnet on a Social Networking Service. It generated recommendations based either on positive ratings of friends (social recommendations), positive ratings of others in the network (nonsocial recommendations), or not based on ratings (random recommendations). The user responses to each type of recommendation indicate that social recommendations are better than non-social recommendations, which are in turn better than random recommendations. Likewise, the discovery of novel and relevant music is more likely via social recommendations than non-social. Social shuffle recommendations enable people to discover music through a serendipitous process powered by human relationships and tastes, exploiting the user’s social network to share cultural experiences.",
"title": ""
},
{
"docid": "ef1ff363769f0b206222d6e14fda95d5",
"text": "In this paper, we propose a novel benchmark for evaluating local image descriptors. We demonstrate that the existing datasets and evaluation protocols do not specify unambiguously all aspects of evaluation, leading to ambiguities and inconsistencies in results reported in the literature. Furthermore, these datasets are nearly saturated due to the recent improvements in local descriptors obtained by learning them from large annotated datasets. Therefore, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols in several tasks such as matching, retrieval and classification. This allows for more realistic, and thus more reliable comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep learning based descriptors within a realistic benchmarks evaluation.",
"title": ""
},
{
"docid": "03a036bea8fac6b1dfa7d9a4783eef66",
"text": "Face recognition from the real data, capture images, sensor images and database images is challenging problem due to the wide variation of face appearances, illumination effect and the complexity of the image background. Face recognition is one of the most effective and relevant applications of image processing and biometric systems. In this paper we are discussing the face recognition methods, algorithms proposed by many researchers using artificial neural networks (ANN) which have been used in the field of image processing and pattern recognition. How ANN will used for the face recognition system and how it is effective than another methods will also discuss in this paper. There are many ANN proposed methods which give overview face recognition using ANN. Therefore, this research includes a general review of face detection studies and systems which based on different ANN approaches and algorithms. The strengths and limitations of these literature studies and systems were included, and also the performance analysis of different ANN approach and algorithm is analysing in this research study.",
"title": ""
},
{
"docid": "bb9f86e800e3f00bf7b34be85d846ff0",
"text": "This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
}
] |
scidocsrr
|